Monday, 28 December 2015

Issues affecting the reliability and validity of Schizophrenia diagnosis

Hello everyone, sorry it's been a while! Thought I'd finally finish off the last schizophrenia post - diagnostic criteria, and issues affecting diagnostic validity. As usual:

Black: AO1 - Description
Blue: AO2 - Evaluation - studies
Red: AO2 - Evaluation - evaluative points/IDAs


Diagnostic Criteria


The diagnostic criteria for schizophrenia were separated by Schneider into two sets: "First-rank" and "Second-rank" symptoms, and it was suggested that having one of the 1st rank symptoms means that you are likely to have schizophrenia. The ICD (International Classification of Diseases) still focuses on 1st-rank symptoms, but the DSM (Diagnostic and Statistical Manual of mental disorders) has moved away from them.

First-rank symptoms include delusions (e.g. of control, or persecution), auditory hallucinations, and thought disturbances (the belief that either your thoughts are being broadcast for others to hear, or that others are inserting thoughts into your head).

Second-rank symptoms include disturbances of speech such as fragmentation, interruption, and incoherency, catatonic symptoms such as stupor, mutism and repeated movements, and "negative" symptoms such as apathy, avolition, a flattened range of emotional expression, and a lack of communication.

However, an issue with the concept of first-rank symptoms is that many other conditions, such as bipolar disorder, share these symptoms, threatening the validity of diagnosis - under the ICD's diagnostic criteria, the presence of just one first-rank symptom, without any second-rank symptoms to help make a more specific diagnosis, could lead to a bipolar patient being incorrectly diagnosed with schizophrenia instead.


Diagnostic Issues


Due to the two different classification systems, criterion validity - the extent to which two different diagnostic systems agree - is an issue. If schizophrenia can be diagnosed using one system but not the other, either diagnosis could be invalid - one of them must be incorrect.

The DSM-V diagnoses schizophrenia on 5 axes: 1 and 2 for symptoms, 3 for medical conditions, 4 for social conditions, and 5 for state of functioning. It no longer differentiates between subtypes of schizophrenia, as these proved unreliable due to symptom overlap between subtypes and changes in a patient's most prominent symptoms over time. To be diagnosed, a patient must show two of the criterion symptoms, one of which must be first-rank, and the symptoms must have been present for the last 6 months and active for at least one month.

The ICD-10 differentiates between 7 subtypes of schizophrenia, such as catatonic, paranoid, residual and undifferentiated, based on most prominent symptoms. To be diagnosed under the ICD, the patient must display one first-rank symptom, or two second-rank symptoms. It places much more emphasis on first-rank symptoms by allowing a patient to be diagnosed based on the presence of just one of them.

Comorbidity is another potential problem with schizophrenia diagnosis - the presence of one or more additional disorders occurring alongside schizophrenia makes it difficult to identify which disorder is causing a specific symptom, making it harder to diagnose the correct disorders and treat them accordingly, reducing diagnostic validity.

Buckley et al (2009) analysed the medical records of their schizophrenic patients, and found that 50% had depression, 47% had substance abuse disorders, 29% had PTSD, and 15% had panic or anxiety disorders. Bottes (2009) carried out a similar analysis for the Psychiatric Times, and found that 26% of schizophrenic patients had OCD, and 52% had obsessive-compulsive symptoms. These studies suggest that many schizophrenics have multiple disorders, and that this should be taken into consideration when diagnosing patients in order to avoid an invalid diagnosis.

Contrastingly, symptom overlap can also invalidate diagnosis - many disorders share symptoms with schizophrenia, meaning that the wrong disorder could be diagnosed based on a specific, shared symptom.

Zorumski and Rubin (2013) found that bipolar disorder's most prominent symptom, severe episodes of mania, often includes delusions, hallucinations and catatonia - all symptoms that could lead to a bipolar sufferer being incorrectly diagnosed with schizophrenia.

Marsha et al (1995) found that 32% of bipolar sufferers showed 1st-rank symptoms of schizophrenia, and Carpenter (1974) found that 16% of depression sufferers showed them, suggesting that there is a clear symptom overlap between the conditions that could affect validity of diagnosis. 

Ross (1998) found that the more 1st-rank symptoms a patient displays, the more likely they are to be diagnosed with multiple personality disorder rather than schizophrenia, suggesting that 1st-rank symptoms are an indicator of MPD rather than schizophrenia specifically.


Reliability


Reliability is the consistency of diagnosis, measured by inter-rater reliability, internal consistency (whether multiple patients with the same symptoms will be diagnosed in the same way) and test-retest consistency.

Several factors can reduce the reliability of schizophrenia diagnosis. The main diagnostic method is the clinical interview, and individual differences between clinicians - age, personality and aptitude - as well as the bond of trust between clinician and patient, will produce different responses, reducing inter-rater reliability. The severity of symptoms at the time of diagnosis can affect test-retest consistency, and deception can affect reliability in general, as evidenced by Rosenhan's 1973 study.

In Rosenhan's 1973 study, 8 sane pseudopatients presented themselves at mental hospitals claiming to have experienced auditory hallucinations. Upon admission for schizophrenia, they stopped presenting symptoms, yet were kept in for 7-52 days despite showing no further schizophrenic symptoms, and were given over 2000 doses of antipsychotic drugs between them. In a follow-up, a mental hospital was told to expect pseudopatients over the next 3 months - 83 out of 193 patients were suspected of being pseudopatients by at least 1 medical professional, but all of them were actually genuine.

Rosenhan claimed from these results that psychiatrists could not make a consistent and accurate diagnosis - although, under the DSM-II criteria of the time (which required only one auditory hallucination), the diagnoses were arguably both reliable and valid, the admissions showing only an inability to recognise a lie. He suggested that patients' behaviour was viewed through the label of their mental illness - the original pseudopatients' behaviour was pathologised, with their note-taking observations recorded as "writing behaviour".

Rosenhan's conclusions lack temporal validity - the modern version of the DSM has been revised to have more stringent diagnostic criteria. Symptoms must now be present for 6 months and active for 1 month in order for schizophrenia to be diagnosed.

Once institutionalised, the participants were passive, not normal - passively keeping up the deception rather than admitting to their lie, and the relative speed of release in light of this could actually have suggested competent and efficient mental health staff.

Rosenhan's suggestion that psychiatrists cannot reliably and accurately diagnose has been challenged by: 

Jakobsen (2005) who tested 100 Danish schizophrenic patients and assessed them based on their case notes, coming to a correct diagnosis for 98% of them.

Hollis (2000) who used the DSM-IV and case notes to correctly assess and diagnose 100% of a sample of schizophrenic patients. 

These results suggest that mental health professionals are much better at diagnosing schizophrenia nowadays, and this part of Rosenhan's conclusion lacks temporal validity.



Cultural bias in diagnosis


Emic constructs are behaviours or norms that apply only within certain cultures, whereas etic constructs apply universally. When an emic construct is treated as if it were a universal norm, this is an imposed etic. Eurocentrism leads to imposed etics in the diagnosis of schizophrenia, as the DSM is used to apply a western, subjective idea of perfect mental health to non-western cultures. If people from one culture are assessing people from another, behaviour can be misconstrued, leading to an invalid diagnosis.


Cultural difference could help to explain the higher rates of schizophrenia diagnosis in some ethnic minorities. If someone is uneasy talking to a psychologist of a different ethnicity to them, they may show withdrawal, alogia, and a lack of eye contact - all of which could be interpreted as symptoms of schizophrenia, reducing diagnostic validity.

Cochrane (1977) found that schizophrenia rates in the UK and in the West Indies are very similar, and close to 1%, but people in the UK of Afro-Caribbean origin are 7 times more likely to be diagnosed with schizophrenia than those of white European ethnicity. Migration stress and socioeconomic factors were ruled out, as other ethnic groups such as South Asian that migrated at a similar time are no more likely to be diagnosed, suggesting that there is a bias that leads to racial overdiagnosis of Afro-Caribbean people.

Harrison (1997) supports the temporal validity of Cochrane's earlier research, finding that 20 years later, the gap in racial diagnosis rates had widened - Afro-Caribbean patients were now 8 times more likely to be diagnosed with schizophrenia. 

Stowell-Smith and McKeown (1999) carried out a discourse analysis of psychiatrists' reports on 18 white and 18 black psychopaths, and found that with black psychopaths there was more emphasis on aggression and potential threat to society, compared to a greater emphasis on trauma and emotional state with white psychopaths. This suggests a racial bias in the area of psychopathology that leads to biased reporting of symptoms, calling diagnostic reliability into question. 

Read (1970) gave 194 UK and 134 US psychiatrists a case report and asked them to make a diagnosis from it. 69% of the US psychiatrists and 2% of the UK psychiatrists diagnosed schizophrenia, suggesting large cultural differences in behaviour interpretation and diagnostic criteria. 

Neki (1973) studied the prevalence of catatonic schizophrenia among schizophrenics in the UK and India, and found the rate was 44% in India but only 4% in the UK, again suggesting large cultural variations in behaviour interpretation and classification.

Friday, 11 December 2015

Theories of relationship maintenance

The investment model can be used as AO2 with which to evaluate either of the other theories, as it is long-term and looks at past and future commitments in a relationship, rather than focusing solely on short-term cost and reward.


Black: AO1 - Description
Blue: AO2 - Evaluation - studies
Red: AO2 - Evaluation - evaluative points/IDAs


Social Exchange Theory (SET)


Proposed by Thibaut and Kelley, this economic theory views relationship behaviour as a series of exchanges, and suggests that everyone is innately selfish, looking for the most profitable relationship that offers the most reward for the least cost. Rewards include emotional fulfilment, sex, and companionship. The theory suggests that people will only stay in a relationship if the rewards outweigh the costs in terms of time, effort and finances. Therefore, commitment to a relationship is dependent on its profitability - we assign behaviours a subconscious numerical value, either positive or negative, indicative of their status and magnitude as either a cost or a reward.

Thibaut and Kelley also proposed that we have "comparison levels" (CL) that form the standards against which we judge our own relationships. Media, parents, family, peers and ex-partners all function as these points of comparison with which we weigh up the costs and benefits of our relationship, as well as looking to internal schemas of how a relationship should be in order to come to a judgement about the value of our relationship. Alternative comparison levels (CL Alts) are other prospective relationships with which we compare our own, evaluating the costs and benefits of leaving our partner and forming a new relationship - if the benefits of the alternative relationship are better than our current one, we are more likely to leave and start a new one. If the benefits of our CL Alts are not as good, we will stay in our current relationship.

Mills and Clark (1980) provided conflicting evidence for social exchange theory with their identification of two types of romantic relationship - the "communal couple", giving out of altruism and concern for their partners, and the "exchange couple", where each keep mental records of who is ahead and who is behind in terms of social exchange. Their suggestion that there are two types of couple challenges the degree to which SET can be applied to real-world relationships - SET only really explains the relationship dynamic between the exchange couple, not the communal couple.

Hatfield (1979) provided further evidence that challenged the validity of SET. Looking at people in romantic relationships who felt over or under-benefitted, they found that those who gave more than they received felt angry and deprived, whereas those who received more than they gave felt guilty and uncomfortable. This challenges the theory that both partners of the relationship are intrinsically selfish and aiming for maximum reward - even though the over-benefitted were getting much more out of the relationship, they felt uncomfortable and unhappy because of this, and sought to equalise the balance.

However, research by Rusbult (1983) supports the central concepts of SET. Participants completed questionnaires over a 7-month period concerning rewards and costs associated with relationships. SET did not explain the early "honeymoon phase" of a relationship where balance of exchanges was ignored, but later on, relationship costs and benefits were significantly correlated with the degree of satisfaction, suggesting that this theory can help explain maintenance of long-term relationships quite well.

An issue with SET is that it could be considered overly reductionist, seeking to explain one of the most complex human behaviours as the result of a series of simple cost/reward analyses. It focuses only on the relationship in the present, ignoring past events and future rewards and commitments, oversimplifying the process of relationship maintenance in an attempt to numerically quantify different aspects of relationship behaviour. A more holistic explanation that takes into account individual differences, such as the degree to which someone desires a "profitable" relationship rather than an equal one, might better explain relationship maintenance in economic terms.

Another issue with SET is that it suffers cultural bias through ethnocentricity, seeking to globally apply the emic construct of desire for individual reward, imposing it as an etic. Western, individualist cultures such as those of the UK and the USA are likely to place more emphasis on the advancement of the individual in society than Eastern, collectivist cultures such as China, which are more likely to emphasise communal interest rather than individual gain. Therefore, this theory cannot necessarily be applied on a cross-cultural level, limiting its application.

A problem with SET is that it relies on two key assumptions: firstly, that people constantly monitor their relationship's costs/rewards and compare them with alternative relationships. However, research has suggested that it is not until dissatisfaction arises that people weigh up costs/rewards and compare them to CL Alts, so this theory may be more applicable to the breakdown of relationships than to their maintenance. Secondly, it assumes that everyone is intrinsically selfish, motivated purely by a desire for personal gain, when this may not be true - Sedikides (2005) suggested that most people are unselfish, doing things for others without expecting anything in return.


Equity Theory


Another economic theory, this one challenges the suggestion that each partner in the relationship is only aiming for personal rewards, suggesting that fairness is more important than profit. It claims that the person who gets less in a relationship feels dissatisfied, and the person who gets more feels guilty and uncomfortable. CL and CL Alts are still valid - comparing the relationship to schemas or alternatives that might offer a fairer deal.

Walster et al suggested a 4 stage model of equity. 
  • People try to maximise their profit in the relationship.
  • Trading rewards occurs to bring about fairness - e.g. a favour or privilege is repaid by the partner.
  • Inequality occurs, producing dissatisfaction - the partner who receives less experiences a greater degree of dissatisfaction.
  • The loser endeavours to rectify the situation and bring about equity - the greater the perceived inequity, the greater the effort to equalise.

Stafford and Canary (2006) provide supporting evidence for equity theory. They asked 200 couples to complete measures of relationship equity and marital satisfaction. Satisfaction was highest in couples who perceived their relationships to be equitable, and lowest for partners who considered themselves to be relatively under-benefited by their relationship. The findings are consistent with the key principle of equity theory - that people are most satisfied in a relationship where the balance of rewards and costs is fairly even and consistent. 

However, research does not support the assumption that equity is equally important in all cultures. Aumer-Ryan et al interviewed men and women in Hawaiian (individualist) and Jamaican (collectivist) universities, and found equity to be less important in Jamaican relationships. This suggests that the theory is culturally biased and cannot be applied equally to both individualist and collectivist cultures, and seeks to impose desire for equity as an etic construct rather than the emic that it actually is.

This theory has real-world application to marital therapy. Attempts to resolve compatibility issues between spouses require issues associated with inequity dissatisfaction to be resolved first, because inequity indicates incompatibility in women's eyes. In research, wives reported lower levels of compatibility than husbands when the relationship was inequitable - suggesting that there are gender differences in how equity is perceived, and the theory is not equally applicable to either gender.

Another issue with this theory is that it assumes everybody wants equality. This is not always the case - as some partners may be perfectly happy to give more than they receive in a relationship without feeling dissatisfied, suggesting equity theory cannot fully explain every type of relationship.  


Investment Model


The final economic theory of relationship maintenance is based around long-term return on investment, looking for the best possible outcome. The number and importance of long-term investments determine whether a relationship will be maintained or whether it will break down. Investments such as houses, children, time, holidays, and assets serve as barriers to dissolution. Commitment to staying in a relationship is based on three factors:
  • Satisfaction: feeling that the rewards the relationship provides are unique.
  • Alternatives: a belief that the relationship offers better rewards than any CL Alts.
  • Investments: substantial investments in the relationship.

Impett et al provide supporting evidence for the investment model. Testing the model in a prospective study of married couples over 18 months, they found that both partners' commitment to the marriage predicted relationship stability and success, suggesting that substantial investment in a relationship helps to maintain and steady it.

Jerstad provides further supporting evidence - he found that investments, most notably the time and effort put into the relationship, were the best predictor of whether or not somebody would stay with a violent partner. Those who had experienced the most violence were often the most committed.

The investment model is more long-term than the other economic theories of relationship maintenance, looking at past and future commitments in a relationship, rather than focusing solely on short-term cost and reward analysis.

However, an issue with the investment model is that it reflects an ethnocentric bias as an explanation of relationship maintenance. Cross-culturally, satisfaction, quality of CL Alts, and investment are not always the factors that influence commitment. There may be cultural or religious pressures to stay in an unsatisfactory relationship, and in some cultures relationship break-up, especially of a marriage, is not socially acceptable. Alternatively, some cultures may have more of a stigma towards one gender initiating a breakup than the other.

Monday, 30 November 2015

Theories of relationship formation

Black - AO1 - Description
Blue - AO2 - Evaluation - studies
Red - AO2 – Evaluation - evaluative points/IDAs

Filter Theory


Kerckhoff and Davis proposed the filter theory, suggesting that relationship formation is based on systematic filtration of possible partners on three levels – starting from a "field of availables."

1 – Social demographic variables. Subconsciously, we filter down to a pool of people belonging to similar social demographics to us – same school, town, workplace etc. Individual characteristics play a very small role at this stage.

2 – Similarity of attitudes and values. Here, the pool is filtered based on the law of attraction – greater similarity brings better communication and a better chance of relationship developing further. Having similar hobbies, beliefs, and interests increases the chance that a relationship will develop further and more deeply.

3 – Complementarity of emotional needs. Once a couple is established in a fairly long term relationship, the relationship will develop for better or for worse depending on how well they fit together as a couple and mutually satisfy their needs. Similarities in the amount of emotional intimacy, sex, social interaction and physical proximity required increases the chance that the relationship will be successful in the long-term.

Kerckhoff and Davis provide supporting evidence for filter theory with their longitudinal study of student couples who had been together for either less or more than 18 months. Attitude similarity was the most important factor up until 18 months; after this, psychological compatibility and the ability to mutually meet needs became the most important factor in determining the quality of the relationship.

The theory can be considered to have a degree of face validity, as it is common sense to assume that similarities in demographic factors, attitudes and values systems would lead to a more happy and successful relationship and would thus be filters that we apply in the selection process.

Sprecher challenged this hypothesis, suggesting that social variables are not the only initial filter, and that couples matched in physical attractiveness, social background and interests were more likely to develop a successful relationship. This is supported by Murstein's matching hypothesis, which suggests that a significant factor in early attraction is the couple being of similar attractiveness levels - though people may desire the most physically attractive partner, they know that in reality they are unlikely to get or keep them, so they look for people of a similar attractiveness to themselves.

Gruber-Baldini et al (1995) carried out a 21-year longitudinal study of couples and found that those who were similar in educational level and age at the start of the relationship were more likely to stay together and have a successful relationship, suggesting these are two factors ignored by Kerckhoff and Davis in their filter theory.

An issue with filter theory is that it could be considered to be overly deterministic, failing to capture the dynamic and fluid nature of human relationships by its division into three distinct stages, and failing to take into account the role of free will in partner selection.  Not all couples will have the same priorities in their relationships at exactly the same stages, and to suggest so is too nomothetic, ignoring individual differences between couples. 

Another issue with filter theory is that it could also be considered to be overly reductionist, seeking to explain the complex nature of relationship behaviour as a result of simple filtration processes, selecting a partner through a process of elimination from a “field of availables.” This is potentially an oversimplification of relationship formation, and cannot definitively explain the formation of homosexual romantic relationships. Homosexual couples may not necessarily have the same experiences that lead to their relationship being initiated as heterosexual couples, so the theory could be considered to have a heterosexist bias.


Reward/Need Satisfaction Theory


Reward/Need satisfaction theory suggests that in order to progress from early attraction, the two people need sufficient motivation to want to continue getting to know each other. Long-term relationships are more likely to be formed if the partners meet each other's needs, providing rewards in the form of fulfilment of a range of needs - including biologically based needs such as sex, and emotional needs such as giving and receiving emotional support, and feeling a sense of belonging.

This theory works on two key principles of the behavioural approach: operant conditioning and classical conditioning. Through classical conditioning, doing activities you enjoy with your partner leads to a conditioned response of happiness to the conditioned stimulus of your partner, leading to an intrinsic feeling of happiness while being around your partner. Through operant conditioning, the sense of belonging and the fulfilment of emotional needs such as intimacy function as rewards in the positive reinforcement process, leading to a strengthened relationship between the couple - they will like each other more and want to spend more time together.

Supporting evidence for this theory comes from Argyle's explanation that relationship formation works as a means to the satisfaction of motivational systems. Argyle (1994) outlined several key motivational systems underpinning social behaviour, and explained how relationship formation satisfies several social needs, namely: biological needs - collective eating and sex; dependency - being comforted; affiliation - a sense of belonging; and self-esteem. This account supports the theory of attraction developing around need fulfilment, with partners acting as means to the fulfilment of certain social needs. 

Further supporting evidence for reward/need satisfaction theory comes from Aron et al (2005), who gave MRI scans to 17 participants who reported being "intensely in love", finding that dopamine-rich areas of the brain showed much more activation when a participant was shown a photo of the person with whom they had fallen in love than a photo of someone they just liked. The amount of dopaminergic activity was positively correlated with the degree to which they felt in love. This supports the role of operant conditioning as a component of reward/need satisfaction theory - just seeing the person they loved stimulated the release of dopamine in the brain's reward circuits.

In some ways, this study treated psychology as a science by using MRI equipment to give objective, scientific results, measuring the activation of dopamine reward circuitry in the brain. This gives the study a high degree of internal validity, providing an objective measurement of the brain activity it claims to measure. However, in some aspects this study was less scientific - the self-reported description of "very much in love" cannot be scientifically and objectively verified, and is not falsifiable.

A study by May and Hamilton (1980) also supports reward/need satisfaction theory, providing evidence for the role of classical conditioning. Female participants evaluated photographs of men while listening to either rock music that induced a positive mood, music that induced a negative mood, or no music at all. Participants gave much more positive evaluations of personal character, physical attractiveness and general attraction in the positive rock music condition than in the other two, suggesting that an association had formed between the positive feeling from the music and the men being evaluated.

Attraction does not necessarily equal formation of relationships - this theory ignores matching and the opportunity to meet. These studies and the theory only explain how a relationship develops once there is already a degree of mutual attraction between two people, not how this initial attraction develops, so they do not fully explain the initiation of relationships.

An issue with May and Hamilton's study is that it was only carried out on female participants. Research and evolutionary theories of sexual selection suggest that attraction develops differently in males and females, so to generalise from females to males without taking these potential gender differences into account would be beta gender bias, and likely to produce an inaccurate generalisation. Research has suggested that males prioritise physical appearance in a partner, while females prioritise status and power - these differences in priorities must be taken into account.

Another issue with reward/satisfaction theory is that it is overly environmentally reductionist, explaining a complex human behaviour to be a result of simple behavioural learning based around reward mechanisms. The theory ignores social, cognitive and biological factors that could play a role in attraction, such as the social demographic variables described by Kerckhoff and Davis in their filter theory.

Thursday, 12 November 2015

Disruption of biological rhythms

Black: AO1 - Description
Blue: AO2 - Evaluation - studies
Red: AO2 - Evaluation - evaluative points/IDAs

Shift work


Normally, exogenous zeitgebers change gradually, such as the changing light levels around the year. However, with shift work and jet lag, this change is rapid, and exogenous zeitgebers become desynchronised with endogenous pacemakers. For animals, this could lead to dangerous situations such as an animal leaving its den at night when dangerous predators are around. In humans, the lack of synchrony may lead to health problems such as gastrointestinal disorders.

Shift workers are required to be alert at night and must sleep in the day, contrary to our natural diurnal lifestyle, and out of synchronisation with available cues from zeitgebers. Night workers experience a "circadian trough" - a period of decreased alertness and body temperature between 12 a.m. and 4 a.m. during their shifts, triggered by a decrease in the stress hormone cortisol. They may also experience sleep deprivation due to being unable to sleep during the day, as daytime sleep is shorter than natural night-time sleep, and more likely to be interrupted.

Czeisler (1982) studied workers at a Utah chemical plant as they adjusted from the traditional backwards shift rotation to a forwards shift rotation. Workers reported feeling less stressed, with fewer health problems and sleeping difficulties, along with higher productivity.  This was due to the workers undergoing "phase delay", where sleep was delayed to adjust to new EZs, rather than the traditional "phase advance", where sleep time was advanced by sleeping earlier than usual. These results suggest that phase delay is healthier than phase advance, as it is significantly easier to adjust to so carries less risk of circadian rhythm disruption.

Czeisler's findings have valuable real-world applications. For businesses employing shift workers, using a forwards rather than backward shift rotation will increase productivity and reduce the risk of employees making mistakes, as well as improve health due to phase delay being easier for the body's circadian clock to adjust to than phase advance.

Gordon et al (1986) found similar results to Czeisler that support the superiority of forward rotation over backwards rotation. Moving police officers from a backwards to a forwards rotation led to a 30% reduction in sleeping on the job, and a 40% reduction in accidents. Officers reported better sleep and less stress.

Studies suggest that there is a significant relationship between chronic circadian disruption resulting from shift work, and organ disease. Knutsson (1996) found that individuals who worked shifts for more than 15 years were 3 times more likely to develop heart disease than non-shift workers. Martino et al (2008) found a link between shift work and kidney disease, and suggested that kidney disease is a potential hazard for long-term shift workers. However, the use of correlations in these studies means that a direct cause and effect cannot be established, and there is not enough evidence to conclude that organ disease is a direct result of shift work - third, intervening variables cannot be ruled out.

The Chernobyl nuclear power plant and the Challenger space shuttle disasters both occurred during night shifts, when performance of workers was most impaired by the circadian trough. The catastrophic nature of these events emphasises the importance that should be placed on healthy shift rotations and the minimising of circadian disruption for workers in order to avoid further disasters.


There are four suggested approaches for dealing with shift work and its circadian disruption.


  • Permanent non-rotating shift work allows the body clock to synchronise with the new exogenous zeitgebers and adapt to a specific rhythm. However, this is unpopular because not many people want permanent night work.
  • Planned napping during shifts has been shown to reduce tiredness and improve employee performance - but this is unpopular with both employees and employers.
  • Improved daysleep for night shift workers - keeping bedrooms quiet and dark, avoiding bright light and stimulants such as caffeine. However, this method can be disruptive of family life and lead to its own pressures.
  • Rapid rotation: rotating shift work patterns every two or three days avoids even trying to adjust to new exogenous zeitgebers. However, it also means that most of the time, rhythms are out of synchronisation, and there is controversy over the suggested effectiveness of this tactic.

The majority of shift work studies (Czeisler, Gordon et al, etc.) involve only male participants, so research into this topic is often gender biased, with results unrepresentative of females. Sex differences could mean differences in the levels of neurotransmitters such as orexin and serotonin that affect sleep cycles, so circadian disruption may affect males and females differently. Generalising results from men to women without taking these neurochemical differences into account would therefore be beta bias.


Jet Lag


Jet lag is the disruption in circadian rhythms caused by travelling through multiple time zones very quickly by aeroplane, causing endogenous pacemakers to become desynchronised with local exogenous zeitgebers. This can result in a number of problems including fatigue, insomnia, anxiety, immune weakness and gastrointestinal disruption.

Flying west to east causes worse symptoms and a greater degree of circadian disruption than flying east to west, because phase advance is required in order to adjust to EZ changes when flying east, whereas phase delay is required in order to adjust to EZ changes when flying west. Studies into shift work demonstrate that phase delay is easier for the body's circadian clock than phase advance, causing a lesser degree of disruption and impairment.

Three ways of coping with jet lag have been suggested. Melatonin supplements, widely used in the US, can top up melatonin levels when jet lag has greatly disrupted circadian rhythms, helping restore the synchronicity between the internal clock (EPs) and local EZs. Planning sleep patterns beforehand has been shown to help adjustment - if arriving in the daytime, stay awake on the plane; if arriving at night, sleep on the plane. Splitting the travel over two days can also help, as each disruption is less severe and a less significant adjustment has to be made on the day of arrival.

Cho (2001) found that airline staff who regularly travelled across 7 time zones had a reduction in temporal lobe size and memory function, providing supporting evidence for the idea that chronic disruption of circadian rhythms due to jet-lag has long-term symptoms of cognitive impairment and neurological damage.

Recht, Lew and Schwartz (1995) provide supporting evidence for the idea that circadian disruption from jet lag is less severe when travelling from east to west, rather than west to east. They studied US baseball teams travelling between time zones over 3 years, and found that teams travelling east to west won 44% of games, whereas teams travelling west to east won only 37%. Although it could be that some teams were simply better than others, the length and sample size of the study means that this should even out. This suggests that phase delay (westward-bound) is easier for the circadian clock to adjust to than phase advance (eastward-bound).

A significant issue with this study is gender bias, as it only studied male participants. Research has suggested that hormonal and neurological differences between males and females can influence sleep behaviour and, by extension, circadian rhythms, so the results may not apply to females - they could be affected differently by the circadian disruption resulting from jet lag. It would be beta bias to generalise these results to both genders without taking potential hormonal differences into account.

Coren (1996) suggested several real-world applications of research into circadian disruption that can help reduce the severity of jet lag's circadian disruption. Firstly, sleep well before travelling - this will help avoid sleep deprivation. Secondly, avoid stimulants and depressants such as alcohol and caffeine - they will make the symptoms worse by further disrupting endogenous pacemakers. Thirdly, immediately adjust to local exogenous zeitgebers upon arrival - going out into the morning daylight as soon as possible helps resynchronise due to light's function as an EZ. Finally, adjust flight behaviour in anticipation; sleep if you're arriving at night, stay awake if you're arriving in the day.


Theories on the function of sleep

In the exam, you can be asked a 24-marker specifically on either restoration or evolutionary theories, so it is important to know both of these in equal depth and breadth.

Black: AO1 - Description
Blue: AO2 - Evaluation - studies
Red: AO2 - Evaluation - evaluative points/IDAs


Evolutionary theories of sleep


Evolutionary theories explain sleep as an adaptive behaviour - one that increases the chance of an organism's survival and reproduction, providing a selective advantage. Sleep has evolved as an essential behaviour due to this selective advantage it has provided over the course of our evolutionary history - animals who did not sleep were more likely to fall victim to predation, so could not go on to reproduce.

Meddis proposed the predator-prey status theory, claiming that sleep evolved to keep prey hidden and safe from predators when normal adaptive activities such as foraging are impossible - at night for diurnal animals, and in the day for nocturnal animals. Therefore, the hours of sleep an animal requires are related to its need for and method of obtaining food, as well as its exposure to predators. Factors other than predator-prey status that can affect sleep behaviour include sleeping environment and foraging requirements. Sleep evolved to ensure animals stay still and out of the way of predators when productive activities are impossible, so the higher the vulnerability to predation, the safer the sleep site, and the less time an animal needs to spend foraging, the more time it should spend sleeping.

This explanation is supported by the fact that animals are often inconspicuous when sleeping - taking the time beforehand to find themselves adequate shelter to keep them hidden from predators. This also explains the early stages of the sleep cycle, "light sleep", as a transitional phase from wake to sleep, allowing the animal to ensure their own safety in their immediate environment before completely losing their alertness.

A study by DeCoursey also supports this explanation. 30 chipmunks had their suprachiasmatic nuclei (a part of the brain involved in regulation of the sleep/wake cycle) removed, and were released into the wild. All 30 chipmunks were killed by predators within 80 days, suggesting that sleep patterns are vital in ensuring the safety of an animal in its natural habitat.

A strength of DeCoursey's study was the scientific validity provided by the use of control groups, treating psychology with rigorous scientific methodology. Three groups of chipmunks were used: one with SCN damage, one who had brain surgery but no SCN damage (to control for the stress of brain surgery) and a healthy control group. The use of these controls means that cause and effect can easily be determined - it can be reliably established that circadian disruption due to SCN damage increases the risk of death due to predation.

However, a study by Allison and Cicchetti challenges this explanation, finding that on average, prey sleep for fewer hours a night than predators - Meddis suggested the opposite trend, so his theory conflicts with these results.

The predator-prey status theory of sleep is holistic, compared to Webb's hibernation theory. Rather than only focusing on one factor (status), Meddis suggested that several other factors can influence sleep behaviour, such as site of sleep (whether it's enclosed in a nest or a cave, or exposed on prairies or plains) and foraging requirements (whether it requires lots of grazing on nutrient-poor food sources, or relatively few hours gathering nutrient-rich foods such as nuts or insects). A holistic theory that takes into account multiple factors is likely to be able to provide the best explanation for the complex behaviour that is sleep.

A problem with explaining sleep as a means to safety from predation is that many species may actually be far more vulnerable during sleep, and it would be safer to remain quiet and still yet alert. However, some species have adapted to this need for vigilance: porpoises sleep with only one brain hemisphere at a time, while mallards sleep with one eye open to watch for potential threats. The phenomenon of snoring also challenges this explanation, as it is likely to draw attention to the otherwise inconspicuous sleeping animal, and increase its risk of predation.

Webb proposed the hibernation theory, claiming that sleep evolved as a way of conserving energy when hunting or foraging were impossible. This theory suggests that animals should sleep for longer if they have a higher metabolic rate, as they burn up energy more quickly, so are in greater need of energy conservation.  Conservation of energy is best carried out by limiting the brain's sensory inputs, i.e. sleep.

Berger and Philips found that sleep deprivation causes increased energy expenditure, especially under bed rest conditions. This suggests that sleep does conserve energy, and is especially useful when you're not doing normal activities.

Studies have found a positive correlation between metabolic rate and required sleep duration - small animals such as mice generally sleep for longer than larger animals, supporting the idea that sleep is adaptive as a form of energy conservation.

In times of hardship, such as when food is scarce or the weather too cold, animals sleep for longer, suggesting that sleep helps them conserve all the energy they can when resources are scarce and every calorie is critical for survival.

However, not all organisms follow this general trend, and there are some extreme outliers that challenge this theory. The sloth, a relatively large animal with a slow metabolic rate, sleeps for approximately 20 hours a day, challenging the general trend that Webb's theory predicts.

REM sleep, characterised by high levels of brain activity, actually uses the same amount of energy as waking. If REM sleep did not serve some other purpose, it would be maladaptive, as it does not help conserve energy due to the high levels of brain activity.


Overall evaluation of evolutionary theories of sleep


Mukhametov (1984) found that bottlenose dolphins sleep with one cerebral hemisphere at a time, allowing them to be asleep yet alert and moving simultaneously. This adaptation supports the idea that an animal's sleep behaviour adapts to suit its environment and better resist selection pressures, lending credibility to the approach.

Generally, evolutionary theories of sleep are holistic, looking at the entire lifestyle of an animal rather than single factors in an attempt to explain and predict sleeping behaviour. Holistic, complex theories are likely to be able to provide the best full explanation for the complex behaviour that is sleep.

Much of the evidence in support of this approach is based on observation of captive animals, rather than animals in the wild. It may not accurately reflect natural animal behaviour, so these studies may lack validity. Also, these theories are impossible to test through experiment or observation, as evolution happens over thousands of years, so the approach is not particularly scientific.

Finally, this approach is overly deterministic, seeing sleep behaviour in humans as well as animals as being entirely caused by our evolutionary past, with no role for free will. This is an oversimplification - there is evidence to suggest that free will can play a role in influencing biological processes such as sleep.


Restoration theories of sleep


Restoration theories explain the physiological patterns associated with sleep as produced by the body's natural recovery processes. Oswald explained NREM sleep as responsible for the body's regeneration, restoring skin cells due to the release of the body's growth hormone during deep sleep. He suggested that REM sleep restores the brain.

Oswald's theory is supported by the findings that newborn babies spend large amounts of time in proto-REM sleep (a third of every day). This is a time of massive brain growth, with the development of new synaptic connections requiring neuronal growth and neurotransmitter production. REM is a very active phase of sleep, with brain energy consumption similar to waking, so Oswald's theory can explain this phase and why it's so dominant in newborns.

Oswald also found that sufferers of severe brain insults, such as drug overdoses, spend much more time in REM sleep. It was also known that skin cells regenerate faster during sleep - Oswald used these results to conclude that REM sleep is for restoration of the brain, and NREM sleep is for restoration of the body.

Jouvet (1967) placed cats on upturned flowerpots surrounded by water, which they would fall into upon entering REM sleep. Over time, the cats became conditioned to wake up upon entering REM sleep, depriving them of the vital fifth stage of sleep. The cats became mentally disturbed very quickly, and died after an average of 35 days. This supports Oswald's theory: the cats had NREM sleep and suffered no obvious physical ailments, but died from organ failure brought on by brain fatigue, resulting from the lack of REM sleep.

Jouvet's use of non-human animals raises an important issue. As well as being potentially considered unethical due to the extreme cruelty inflicted upon the animals for relatively little in the way of socially important results, the use of cats is a problem due to physiological differences in the mechanisms controlling sleep in humans and cats, meaning that it is anthropomorphic to generalise the results to humans.

Horne's restoration theory suggests that REM and deep NREM sleep are essential for normal brain function, as the brain restores itself in these stages of "core sleep." Light NREM has no obvious function - Horne refers to it as optional sleep, which might have had a role in keeping the animal inconspicuous by ensuring safety before its progression to deep sleep. Entering NREM causes a surge in growth hormone release - but this is unlikely to be used for tissue growth and repair, as the nutrients required will already have been used. He therefore theorises that bodily restoration takes place in hours of relaxed wakefulness during the day, when energy expenditure is low and nutrients are readily available.

Supporting evidence for Horne's theory comes from sleep-deprived participants given cognitive tasks to carry out. They can only maintain reasonable performance through significantly increased effort, suggesting that sleep deprivation impairs cognition because the brain has not had the core sleep necessary to maintain normal cognitive function.

Radio DJ Peter Tripp managed to stay awake for 8 days (200 hours). During this time he suffered delusions and hallucinations so severe it was impossible to test his psychological functioning. It is thought that sleep deprivation caused these effects as the brain was unable to restore itself. This supports Horne's theory, as having no REM or NREM lead to cognitive disturbances, rather than any physical impairment.

Randy Gardner remained awake for 11 days (264 hours), suffering from slurred speech, blurred vision and paranoia. He had fewer symptoms than Tripp despite being awake for longer, and soon managed to adjust back to his usual sleep pattern after the experiment. This again supports Horne's theory - slurred speech, paranoia and blurred vision are likely to be a result of neurological rather than physical impairment due to lack of core sleep.

Both Tripp and Gardner's studies are case studies, meaning they lack generalisability to a wider population. The massive differences found between just two cases suggest that individual differences play a large role in how a person experiences sleep and how much sleep they need - too large a role to draw any valid conclusions from case studies alone.

Also, Tripp and Gardner were both male. Research has shown that hormone levels can play a large role in determining how the individual experiences sleep, so, taking into account hormonal differences between genders, it would be beta bias to generalise their results to females.

Finally, a methodological issue in Gardner's study comes from the observation of symptoms like blurred vision. It is difficult to establish whether this has a psychological or physiological cause, as it could be either a result of bodily impairment, such as a malfunction of the optic nerve, or of brain impairment, such as a malfunction of the occipital lobe, the part of the brain responsible for visual processing. This makes it difficult to establish what damage was done by the sleep deprivation - physical and mental, as Oswald would suggest, or purely mental, as Horne would suggest?

Tuesday, 10 November 2015

Lifespan changes in the nature of sleep

This is part of the "nature of sleep" topic, concerning stages of sleep as well as lifespan changes. However, questions have never specifically asked about stages, and there is a lot more to talk about as well as a lot more AO2 for lifespan changes than there is for the sleep cycle. If people want a specific post on stages, I can do that as well, but this is all you should need as far as I know.


Black: AO1 - Description
Blue: AO2 - Evaluation - studies
Red: AO2 - Evaluation - evaluative points/IDAs


Lifespan changes in the nature of sleep



The nature of sleep varies dramatically over the course of the human lifespan, both in terms of the required duration of sleep, and the proportion of the different stages of sleep. The proportion of REM (rapid eye movement) sleep shows an overall decrease throughout the years, whilst the proportion of NREM (non-rapid eye movement) sleep increases.


Floyd et al (2007) reviewed nearly 400 sleep studies, and found that REM sleep decreases by an average of 0.6% a decade. The proportion of REM sleep starts to increase again from about age 70, though this may be due to an overall decline in sleep duration.


Neonates (newborn babies) sleep for over 16 hours a day over several sleep periods. After birth, they display "active sleep" - an immature form of REM sleep showing high brain activity, and "quiet sleep" - an immature form of slow-wave (deep) sleep. The proportion of quiet sleep increases and the proportion of active sleep decreases as they grow from newborns to infants. Newborns in REM sleep are often restless, with arms, legs and facial muscles moving almost constantly. Newborns enter REM sleep immediately, and do not develop the NREM/REM sleep sequence until 3 months of age. Over the first few months, the proportion of REM sleep declines rapidly.


Eaton-Evans and Dugdale (1988) found that the number of sleep periods for a baby decreases until about 6 months of age, then increases until 9 months, before slowly decreasing again. This disruption between 6 and 9 months could possibly be a result of teething problems - the pain from teething leading to restless and disrupted sleep, with frequent periods of wake.

Baird et al (2009) found an increased risk of waking between midnight and 6 a.m. in infants between 6 and 12 months old whose mothers had experienced depressive symptoms during or immediately preceding pregnancy. Regular night waking in the first year is associated with later sleep disruption, behavioural problems and learning difficulties.

A real-world application of this research is emphasising the importance of the early establishment of regular sleep patterns in infants, as well as the importance of the mother's mental health during pregnancy. In order to reduce the risk of behavioural problems and learning difficulties later on in childhood, parents should attempt to settle their baby into a regular sleep pattern as soon as possible after the disrupted sleep between 6 to 9 months has passed. Research has also suggested that nurturing a regular sleep cycle can reduce the incidence of SIDS (sudden infant death syndrome).

Puberty marks the onset of adolescence, during which sexual and pituitary hormones are released in pulses during slow-wave (deep) sleep. Sleep quality and quantity do not change drastically, but external pressures and stress may lead to a less regular sleep cycle. Both sexes may begin to experience erotic dreams.


Crowley et al (2007) explained the change in the sleep patterns of adolescents as a result of changes in hormone levels, described as "delayed sleep phase syndrome" upsetting the circadian clock.

This research has a valuable real-world application in education. Wolfson and Carskadon suggested that schools should begin later in the day to accommodate the poor concentration and attention spans of adolescents earlier in the morning. This change could have a positive effect on learning and productivity, thereby improving exam results and academic achievement.


A shallowing and shortening of sleep may occur in middle age, with increasing levels of fatigue. There is a decrease in the amount of stage 4 (deep) sleep, and it may be more difficult to stay awake. Weight issues may lead to respiratory problems such as snoring that can affect quality of sleep.


Van Cauter et al (2000) examined several sleep studies involving male participants, and found two periods of significant reduction in total amount of sleep: between 16 and 25, and between 35 and 50.


Only male participants were studied, so it is difficult to generalise to females, and it would be beta bias to attempt this generalisation. Other sleep studies have demonstrated the importance of hormones in controlling circadian rhythms and the sleep cycle, suggesting that hormonal differences between men and women lead to different changes in sleep at different life stages. Also, environmental factors that affect the nature of sleep, such as stress, can affect men and women differently, meaning that the results cannot be generalised to both genders.


Conclusion



There is significant evidence to suggest that both the type and quantity of sleep vary tremendously over the course of the human lifespan, with neonates experiencing a different form of sleep to other age groups, and most people undergoing a steady decline in the proportion of REM sleep up until senescence.


Studies in this area use rigorous scientific methodology in their approach to studying sleep, often using electroencephalograph (EEG) machines to measure electrical activity in the brain over the course of a night's sleep. The use of sleep labs and EEGs provides a reliable and objective measurement of brain activity, but their use can reduce a study's validity. When participants sleep in a sleep lab, they are not exposed to external interruptions such as traffic or noisy neighbours that can reduce quality of sleep. Additionally, EEG equipment is bulky and uncomfortable to wear, which might also reduce sleep quality. These factors mean that results gathered may not have very high external validity, and lack real-world generalisability.


Research also suffers from cultural bias, as many of the studies take place in the UK or the USA, and thus are more likely to include American and British participants. Assuming that the results obtained apply globally is beta bias, and likely to be incorrect - for example, many Mediterranean countries take "siestas" - daytime naps that split sleep into two blocks rather than one. Cultural practices such as these mean that conclusions drawn from studying American and British participants are unlikely to be cross-culturally applicable.

Monday, 9 November 2015

Infradian Rhythms

This follows on nicely from the circadian rhythms topic - similar construction, use of IDAs, description etc. I'll be focusing on Seasonal Affective Disorder and the menstrual cycle here as my two examples of infradian rhythms. Again, I'm writing in the style of an exam question response, so will only include as much detail as we could be expected to write in half an hour - but if anyone is interested in more studies to use, post a comment or message me!

Black: AO1 - Description
Blue: AO2 - Evaluation - studies
Red: AO2 - Evaluation - evaluative points/IDAs


Control of infradian rhythms - EPs and EZs


Infradian rhythms are biological rhythms with a cycle length of more than 24 hours - such as the menstrual cycle, with a rhythm length of a month, or Seasonal Affective Disorder (SAD), with seasonal fluctuation. SAD is characterised by depression experienced in the winter months, which then disappears during the summer. There are two main endogenous pacemakers that control SAD: the hormone melatonin and the neurotransmitter serotonin. These two chemicals are normally in a balanced equilibrium, but when an increase in melatonin occurs, leading to a fall in serotonin levels, depression can develop. Light is a key exogenous zeitgeber in this cycle. During the winter months, when there is much less sunlight, the pineal gland will produce more melatonin, leading to a fall in serotonin levels that causes depression. This explains the winter onset and summer disappearance of SAD.

Evidence to support the serotonin-melatonin hypothesis of SAD comes from the successful real-world application of phototherapy in its treatment. Daily exposure to artificially heightened light levels using a light box has been shown to help alleviate symptoms of SAD. This, as well as being a valuable application of the theory, supports the idea of low light exposure causing SAD's melatonin rise and serotonin fall.

Further supporting evidence comes from a study by Eastman, who found that in SAD patients, a reduction in symptom severity was much more likely when they were exposed daily to bright morning light rather than dim evening light or a placebo. Again, this supports the concept of a sunlight deficiency being responsible for SAD - exposure to the bright lights triggered the release of serotonin, helping restore a healthy serotonin/melatonin equilibrium.

However, Murphy measured the serotonin and melatonin of a sample of SAD sufferers and a control group of non-sufferers hourly over a few days, and found no significant differences between the groups for the levels of either chemical. This challenges the serotonin-melatonin hypothesis, suggesting that factors other than chemical levels are present in the development of SAD.

Biological reductionism is a key issue with this theory. During winter, many people have much lower levels of social contact and physical activity - both of these factors could play a role in the winter onset of SAD. It is overly simplistic to suggest that the disorder is purely a result of biological factors, when social factors could play just as important a role in explaining its winter onset.

The menstrual cycle is another infradian rhythm, with an average cycle length of 28 days. It is regulated by the hormones progesterone and oestrogen, produced by the pituitary gland and the ovaries. As the cycle begins, both hormone levels are low during menstruation; a surge in oestrogen levels then triggers ovulation. After this, progesterone levels steadily increase over two weeks, maintaining the uterine lining in preparation for pregnancy. If the egg is not fertilised after two weeks, a drop in both hormone levels triggers menstruation, restarting the cycle.

Research has demonstrated that the pheromones of other women are an exogenous zeitgeber that affect hormone levels, influencing the menstrual cycle. Russell used a single blind trial, applying cotton pads containing traces of pheromones from the "odour donor" to participants, and found that the participants' menstrual cycles synchronised with those of the donor. This supports the concept of pheromones as an EZ that can influence this infradian rhythm.

Through its use of a single-blind trial and a control group to help establish cause and effect, this study uses fairly scientific methodology to gather data. This lends credibility to the results - due to these controls, it has high replicability, meaning it can be repeated with similar results on different sample groups, helping establish and generalise a conclusion.


However, a study by Yang and Schank challenged this conclusion. Studying 186 pairs of Chinese women who lived together, controlling for other studies' methodological errors, no synchronisation of cycles was found other than by pure chance. This suggests that pheromones are not an EZ that affects this biological rhythm, and the menstrual cycle is only controlled by hormonal EPs.

However, a key issue with this study is that despite its large sample size, it lacks generalisability, and applying its results cross-culturally would be culturally biased. Studying only Chinese women and then attempting to apply the results globally imposes an etic construct on all cultures - especially as high levels of environmental pollution in China could interfere with pheromone transmission, making Chinese women particularly unrepresentative of the global population. This attempted generalisation marginalises potential differences between ethnicities and cultures that could mean the results do not apply cross-culturally.


Conclusion


In conclusion, while there is evidence to support the role of exogenous zeitgebers in infradian rhythm control, both SAD and the menstrual cycle seem to be predominantly regulated by endogenous pacemakers. However, evidence has shown that free will can affect biological and biochemical systems (Born et al, 1999: participants who chose to wake early had higher ACTH levels in their blood), so suggesting that SAD is entirely a result of uncontrollable biological processes is overly deterministic. Similarly, the true mechanism that controls infradian rhythms is likely to involve both biological and environmental factors - it is too reductionist to single out either type of factor as being entirely responsible.