Monday, 30 November 2015

Theories of relationship formation

Black - AO1 - Description
Blue - AO2 - Evaluation - studies
Red - AO2 – Evaluation - evaluative points/IDAs

Filter Theory


Kerckhoff and Davis proposed the filter theory, suggesting that relationship formation is based on systematic filtration of possible partners on three levels – starting from a "field of availables."

1 – Social demographic variables. Subconsciously, we filter down to a pool of people belonging to similar social demographics to us – same school, town, workplace etc. Individual characteristics play a very small role at this stage.

2 – Similarity of attitudes and values. Here, the pool is filtered based on the law of attraction – greater similarity brings better communication and a better chance of relationship developing further. Having similar hobbies, beliefs, and interests increases the chance that a relationship will develop further and more deeply.

3 – Complementarity of emotional needs. Once a couple is established in a fairly long term relationship, the relationship will develop for better or for worse depending on how well they fit together as a couple and mutually satisfy their needs. Similarities in the amount of emotional intimacy, sex, social interaction and physical proximity required increases the chance that the relationship will be successful in the long-term.

Kerckhoff and Davis provide supporting evidence for filter theory with their longitudinal study of student couples, comparing those together for less than 18 months with those together for longer. Attitude similarity was the most important factor up until 18 months; after this, psychological compatibility and the ability to mutually meet needs became the most important factor in determining the quality of the relationship.

The theory can be considered to have a degree of face validity, as it is common sense to assume that similarities in demographic factors, attitudes and value systems would lead to a happier and more successful relationship, and would thus be filters that we apply in the selection process.

Sprecher challenged this hypothesis, suggesting that social variables are not the only initial filter: couples matched in physical attractiveness, social background and interests were more likely to develop a successful relationship. This is supported by Murstein's matching hypothesis, which suggests that a significant factor in early attraction is the couple being of similar attractiveness levels – though people may desire the most physically attractive partner, they know in reality they are unlikely to get or to keep them, so they look for people of a similar attractiveness to themselves.

Gruber-Baldini et al (1995) carried out a 21-year longitudinal study of couples and found that those who were similar in educational level and age at the start of the relationship were more likely to stay together and have a successful relationship, suggesting these are two factors ignored by Kerckhoff and Davis in their filter theory.

An issue with filter theory is that it could be considered to be overly deterministic, failing to capture the dynamic and fluid nature of human relationships by its division into three distinct stages, and failing to take into account the role of free will in partner selection.  Not all couples will have the same priorities in their relationships at exactly the same stages, and to suggest so is too nomothetic, ignoring individual differences between couples. 

Another issue with filter theory is that it could also be considered to be overly reductionist, seeking to explain the complex nature of relationship behaviour as a result of simple filtration processes, selecting a partner through a process of elimination from a “field of availables.” This is potentially an oversimplification of relationship formation, and cannot definitively explain the formation of homosexual romantic relationships. Homosexual couples may not necessarily have the same experiences that lead to their relationship being initiated as heterosexual couples, so the theory could be considered to have a heterosexist bias.


Reward/Need Satisfaction Theory


Reward/need satisfaction theory suggests that in order to progress from early attraction, the two people need sufficient motivation to want to continue getting to know each other. Long-term relationships are more likely to be formed if the partners meet each other's needs, providing rewards in the form of fulfilment of a range of needs - including biologically based needs such as sex, and emotional needs such as giving and receiving emotional support and feeling a sense of belonging.

This theory works on two key principles of the behavioural approach: classical conditioning and operant conditioning. Through classical conditioning, repeatedly doing activities you enjoy with your partner creates an association between the pleasure of the activity and your partner, so your partner becomes a conditioned stimulus producing a conditioned response of happiness - an intrinsic feeling of happiness while being around them. Through operant conditioning, the sense of belonging and the fulfilment of emotional needs such as intimacy function as rewards in a positive reinforcement process, strengthening the relationship - the couple will like each other more and want to spend more time together.

Supporting evidence for this theory comes from Argyle's explanation of relationship formation as a means to the satisfaction of motivational systems. Argyle (1994) outlined several key motivational systems underpinning social behaviour, and explained how relationship formation satisfies several social needs: biological needs (collective eating, sex), dependency (being comforted), affiliation (a sense of belonging) and self-esteem. This supports the theory of attraction developing around need fulfilment, with partners acting as means to the fulfilment of certain social needs.

Further supporting evidence for reward/need satisfaction theory comes from Aron et al (2005), who gave fMRI scans to 17 participants who reported being "intensely in love", finding that dopamine-rich areas of the brain showed much more activation when the participant was shown a photo of the person with whom they had fallen in love than a photo of someone they merely liked. The amount of dopaminergic activity was positively correlated with the degree to which they felt in love. This supports the role of reward as a component of reward/need satisfaction theory - just seeing the person they loved stimulated the release of dopamine in the brain's reward circuits.

In some ways, this study treated psychology as a science, using brain-scanning equipment to obtain objective measurements of the activation of dopamine reward circuitry in the brain. This gives the study a degree of internal validity, as it objectively measures the brain activity it claims to measure. However, in other respects the study was less scientific - the self-reported description of being "intensely in love" cannot be scientifically or objectively verified, and is not falsifiable.

A study by May and Hamilton (1980) also supports reward/need satisfaction theory, providing evidence for the role of classical conditioning. Female participants evaluated photographs of men while listening to rock music that induced a positive mood, music that induced a negative mood, or no music at all. The participants gave much more positive evaluations of personal character, physical attractiveness and general attraction in the rock music condition than in the other two, suggesting that an association had formed between the positive feeling produced by the music and the men being evaluated.

Attraction does not necessarily equal formation of relationships - this theory ignores matching and the opportunity to meet. These studies and the theory only explain how a relationship develops once there is already a degree of mutual attraction between two people, not how this initial attraction develops, so they do not fully explain the initiation of relationships.

An issue with May and Hamilton's study is that it was only carried out on female participants. Research and evolutionary theories of sexual selection suggest that attraction develops differently in males and females, so to generalise from females to males without taking these potential gender differences into account would be beta gender bias, and likely to produce an inaccurate generalisation. Research has suggested that males prioritise physical appearance in a partner, while females prioritise status and power - these differences in priorities must be taken into account.

Another issue with reward/need satisfaction theory is that it is overly environmentally reductionist, explaining a complex human behaviour as the result of simple behavioural learning based around reward mechanisms. The theory ignores social, cognitive and biological factors that could play a role in attraction, such as the social demographic variables described by Kerckhoff and Davis in their filter theory.

Thursday, 12 November 2015

Disruption of biological rhythms

Black: AO1 - Description
Blue: AO2 - Evaluation - studies
Red: AO2 - Evaluation - evaluative points/IDAs

Shift work


Normally, exogenous zeitgebers change gradually, such as light levels changing over the course of the year. However, with shift work and jet lag, the change is rapid, and exogenous zeitgebers become desynchronised from endogenous pacemakers. For animals, this could lead to dangerous situations, such as an animal leaving its den at night when dangerous predators are around. In humans, the lack of synchrony may lead to health problems such as gastrointestinal disorders.

Shift workers are required to be alert at night and must sleep in the day, contrary to our natural diurnal lifestyle, and out of synchronisation with available cues from zeitgebers. Night workers experience a "circadian trough" - a period of decreased alertness and body temperature between 12 a.m. and 4 a.m. during their shifts, triggered by a decrease in the stress hormone cortisol. They may also experience sleep deprivation due to being unable to sleep during the day, as daytime sleep is shorter than natural night-time sleep, and more likely to be interrupted.

Czeisler (1982) studied workers at a Utah chemical plant as they adjusted from the traditional backwards shift rotation to a forwards shift rotation. Workers reported feeling less stressed, with fewer health problems and sleeping difficulties, along with higher productivity. This was due to the workers undergoing "phase delay", where sleep was delayed to adjust to new EZs, rather than the traditional "phase advance", where sleep time was advanced by sleeping earlier than usual. These results suggest that phase delay is healthier than phase advance, as it is significantly easier to adjust to and so carries less risk of circadian rhythm disruption.

Czeisler's findings have valuable real-world applications. For businesses employing shift workers, using a forwards rather than backward shift rotation will increase productivity and reduce the risk of employees making mistakes, as well as improve health due to phase delay being easier for the body's circadian clock to adjust to than phase advance.

Gordon et al (1986) found similar results to Czeisler that support the superiority of forward rotation over backwards rotation. Moving police officers from a backwards to a forwards rotation led to a 30% reduction in sleeping on the job, and a 40% reduction in accidents. Officers reported better sleep and less stress.

Studies suggest that there is a significant relationship between chronic circadian disruption resulting from shift work, and organ disease. Knutsson (1996) found that individuals who worked shifts for more than 15 years were 3 times more likely to develop heart disease than non-shift workers. Martino et al (2008) found a link between shift work and kidney disease, and suggested that kidney disease is a potential hazard for long-term shift workers. However, the use of correlations in these studies means that direct cause and effect cannot be established, and there is not enough evidence to conclude that organ disease is a direct result of shift work - third, intervening variables cannot be ruled out.

The Chernobyl nuclear power plant and the Challenger space shuttle disasters both occurred during night shifts, when performance of workers was most impaired by the circadian trough. The catastrophic nature of these events emphasises the importance that should be placed on healthy shift rotations and the minimising of circadian disruption for workers in order to avoid further disasters.


There are four suggested approaches with which to deal with shift work and its circadian disruption.


  • Permanent non-rotating shift work allows the body clock to synchronise with the new exogenous zeitgebers and adapt to a specific rhythm. However, this is unpopular because not many people want permanent night work.
  • Planned napping during shifts has been shown to reduce tiredness and improve employee performance - but this is unpopular with both employees and employers.
  • Improved daysleep for night shift workers - keeping bedrooms quiet and dark, avoiding bright light and stimulants such as caffeine. However, this method can be disruptive of family life and lead to its own pressures.
  • Rapid rotation: rotating shift work patterns every two or three days avoids even trying to adjust to new exogenous zeitgebers. However, it also means that most of the time, rhythms are out of synchronisation, and there is controversy over the suggested effectiveness of this tactic.

The majority of shift work studies (Czeisler, Gordon et al, etc.) involve only male participants, so research into this topic is often gender biased, with the results often unrepresentative of females. Sex differences could mean differences in the levels of neurotransmitters such as orexin and serotonin that affect sleep cycles, so circadian disruption may affect males and females differently. This means that it would be beta gender bias to generalise results from men to women without taking these neurochemical differences into account.


Jet Lag


Jet lag is the disruption in circadian rhythms caused by travelling through multiple time zones very quickly by aeroplane, causing endogenous pacemakers to become desynchronised with local exogenous zeitgebers. This can result in a number of problems including fatigue, insomnia, anxiety, immune weakness and gastrointestinal disruption.

Flying west to east causes worse symptoms and a greater degree of circadian disruption than flying east to west, because phase advance is required in order to adjust to EZ changes when flying east, whereas phase delay is required in order to adjust to EZ changes when flying west. Studies into shift work demonstrate that phase delay is easier for the body's circadian clock than phase advance, causing a lesser degree of disruption and impairment.

Three ways of coping with jet lag have been suggested. Melatonin supplements are widely prescribed in the US to restore melatonin levels when jet lag has greatly disrupted circadian rhythms in order to restore the synchronicity between the internal clock (EPs) and EZs. Planning sleep patterns beforehand has been shown to help adjustment - if arriving in the daytime, stay awake on the plane, if arriving at nighttime, sleep on the plane. Splitting the travel into two days can also help, as each disruption is less severe and people have to make a less significant adjustment on the day of arrival.

Cho (2001) found that airline staff who regularly travelled across 7 time zones had a reduction in temporal lobe size and memory function, providing supporting evidence for the idea that chronic disruption of circadian rhythms due to jet-lag has long-term symptoms of cognitive impairment and neurological damage.

Recht, Lew and Schwartz (1995) provide supporting evidence for the idea that circadian disruption from jet lag is less severe when travelling from east to west, rather than west to east. They studied US baseball teams travelling between time zones for 3 years, and found that teams travelling east to west won 44% of games, whereas teams travelling west to east won only 37%. Although it could be that some teams were simply better than others, the length and sample size of the study means that this should even out. This suggests that phase delay (westward-bound) is easier for the circadian clock to adjust to than phase advance (eastward-bound.)

A significant issue with this study is gender bias, considering that it only studied male participants. Research has suggested that hormonal and neurological differences between males and females can influence sleep behaviour and, by extension, circadian rhythms, so results from this study may not apply to females too - they could be differently affected by circadian disruption resulting from jet lag. It is beta gender bias to generalise these results to both genders without taking potential hormonal differences between genders into account.  

Coren (1996) suggested several real-world applications of research into circadian disruption that can help reduce the severity of jet lag's circadian disruption. Firstly, sleep well before travelling - this will help avoid sleep deprivation. Secondly, avoid stimulants and depressants such as alcohol and caffeine - they will make the symptoms worse by further disrupting endogenous pacemakers. Thirdly, immediately adjust to local exogenous zeitgebers upon arrival - going out into the morning daylight as soon as possible helps resynchronise due to light's function as an EZ. Finally, adjust flight behaviour in anticipation; sleep if you're arriving at night, stay awake if you're arriving in the day.


Theories on the function of sleep

In the exam, you can be asked a 24-marker specifically on either restoration or evolutionary theories, so it is important to know both of these in equal depth and breadth.

Black: AO1 - Description
Blue: AO2 - Evaluation - studies
Red: AO2 - Evaluation - evaluative points/IDAs


Evolutionary theories of sleep


Evolutionary theories explain sleep as an adaptive behaviour - one that increases the chance of an organism's survival and reproduction, providing a selective advantage. Sleep has evolved as an essential behaviour due to this selective advantage it has provided over the course of our evolutionary history - animals who did not sleep were more likely to fall victim to predation, so could not go on to reproduce.

Meddis proposed the predator-prey status theory, claiming that sleep evolved to keep prey hidden and safe from predators when normal adaptive activities such as foraging are impossible - at night for diurnal animals, and in the day for nocturnal animals. The hours of sleep required are therefore related to an animal's need for and method of obtaining food, as well as its exposure to predators. Factors other than predator-prey status that can affect sleep behaviour include the sleeping environment and foraging requirements. Sleep evolved to ensure animals stay still and out of the way of predators when productive activities are impossible, so the higher the vulnerability to predation, the safer the sleep site, and the less time required for foraging, the more time an animal should spend sleeping.

This explanation is supported by the fact that animals are often inconspicuous when sleeping - taking the time beforehand to find themselves adequate shelter to keep them hidden from predators. This also explains the early stages of the sleep cycle, "light sleep", as a transitional phase from wake to sleep, allowing the animal to ensure their own safety in their immediate environment before completely losing their alertness.

A study by DeCoursey also supports this explanation. 30 chipmunks had their suprachiasmatic nucleus (a part of the brain involved in regulating the sleep/wake cycle) lesioned, and were released into the wild. Within 80 days, the lesioned chipmunks had been killed by predators, suggesting that sleep patterns are vital in ensuring an animal's safety in its natural habitat.

A strength of DeCoursey's study was the scientific rigour provided by the use of control groups, treating psychology with rigorous scientific methodology. Three groups of chipmunks were used: one with SCN damage, one that had brain surgery but no SCN damage (to control for the stress of brain surgery) and a healthy control group. The use of these controls means that cause and effect can be determined - it can be reliably established that circadian disruption due to SCN damage increases the risk of death due to predation.

However, a study by Allison and Cicchetti challenges this explanation, finding that on average, prey sleep for fewer hours a night than predators - Meddis suggested the opposite trend, so his theory conflicts with these results.

The predator-prey status theory of sleep is holistic compared to Webb's hibernation theory. Rather than focusing on only one factor (predator-prey status), Meddis suggested that several other factors can influence sleep behaviour, such as the site of sleep (whether it is enclosed in a nest or a cave, or exposed on prairies or plains) and foraging requirements (whether feeding requires many hours of grazing on nutrient-poor food sources, or relatively few hours gathering nutrient-rich foods such as nuts or insects). A holistic theory that takes multiple factors into account is more likely to provide a full explanation for a behaviour as complex as sleep.

A problem with explaining sleep as a means to safety from predation is that many species may actually be far more vulnerable during sleep, and it would be safer to remain quiet and still yet alert. However, some species have adapted to this need for vigilance: porpoises sleep with only one brain hemisphere at a time, while mallards sleep with one eye open to watch for potential threats. The phenomenon of snoring also challenges this explanation, as it is likely to draw attention to an otherwise inconspicuous sleeping animal, increasing its risk of predation.

Webb proposed the hibernation theory, claiming that sleep evolved as a way of conserving energy when hunting or foraging were impossible. This theory suggests that animals should sleep for longer if they have a higher metabolic rate, as they burn up energy more quickly, so are in greater need of energy conservation.  Conservation of energy is best carried out by limiting the brain's sensory inputs, i.e. sleep.

Berger and Phillips found that sleep deprivation causes increased energy expenditure, especially under bed rest conditions. This suggests that sleep does conserve energy, and is especially useful when normal activities are impossible.

Studies have found a positive correlation between metabolic rate and required sleep duration - small animals such as mice generally sleep for longer than larger animals, supporting the idea that sleep is adaptive as a form of energy conservation.

In times of hardship, such as when food is scarce or the weather too cold, animals sleep for longer, suggesting that sleep helps them conserve all the energy they can when resources are scarce and every calorie is critical for survival.

However, not all organisms follow this general trend, and there are some extreme outliers that challenge this theory. The sloth, a relatively large animal with a slow metabolic rate, sleeps for approximately 20 hours a day, going against the general trend that Webb's theory predicts.

REM sleep, characterised by high levels of brain activity, actually uses the same amount of energy as waking. If REM sleep did not serve some other purpose, it would be maladaptive, as it does not help conserve energy due to the high levels of brain activity.


Overall evaluation of evolutionary theories of sleep


Mukhametov (1984) found that bottlenose dolphins sleep with one cerebral hemisphere at a time, allowing them to be asleep yet alert and moving simultaneously. This adaptation supports the idea that sleep behaviour evolves to suit an animal's environment and to better resist selection pressures, lending credibility to the evolutionary approach.

Generally, evolutionary theories of sleep are holistic, looking at the entire lifestyle of an animal rather than single factors in an attempt to explain and predict sleeping behaviour. Holistic, complex theories are likely to be able to provide the best full explanation for the complex behaviour that is sleep.

Much of the evidence in support of this approach is based on observation of captive animals, rather than animals in the wild. It may not accurately reflect natural animal behaviour, so these studies may lack validity. Also, these theories are impossible to test through experiment or observation, as evolution happens over thousands of years, so the approach is not particularly scientific.

Finally, this approach is overly deterministic, seeing sleep behaviour in humans as well as animals as being entirely caused by our evolutionary past, with no role for free will. This is an oversimplification - there is evidence to suggest that free will can play a role in influencing biological processes such as sleep.


Restoration theories of sleep


Restoration theories explain the physiological patterns associated with sleep as produced by the body's natural recovery processes. Oswald explained NREM sleep as responsible for the body's regeneration, restoring skin cells due to the release of the body's growth hormone during deep sleep. He suggested that REM sleep restores the brain.

Oswald's theory is supported by the findings that newborn babies spend large amounts of time in proto-REM sleep (a third of every day.)  This is a time of massive brain growth, with the development of new synaptic connections requiring neuronal growth and neurotransmitter production. REM is a very active phase of sleep, with brain energy consumption similar to waking, so Oswald's theory can explain this phase and why it's so dominant in newborns.

Oswald also found that sufferers of severe brain insults, such as drug overdoses, spend much more time in REM sleep. It was also known that skin cells regenerate faster during sleep - Oswald used these findings to conclude that REM sleep is for restoration of the brain, and NREM sleep is for restoration of the body.

Jouvet (1967) placed cats on upturned flowerpots surrounded by water, into which they would fall upon entering REM sleep. Over time, the cats became conditioned to wake up upon entering REM sleep, depriving them of the vital fifth stage of sleep. The cats became mentally disturbed very quickly, and died after an average of 35 days. This supports Oswald's theory: the cats had NREM sleep and suffered no obvious physical ailments, but died from organ failure brought on by brain fatigue resulting from the lack of REM sleep.

Jouvet's use of non-human animals raises an important issue. As well as being potentially considered unethical due to the extreme cruelty inflicted upon the animals for relatively little in the way of socially important results, the use of cats is a problem due to physiological differences in the mechanisms controlling sleep in humans and cats, meaning that it is anthropomorphic to generalise the results to humans.

Horne's restoration theory suggests that REM and deep NREM sleep are essential for normal brain function, as the brain restores itself during these stages of "core sleep." Light NREM has no obvious function - Horne refers to it as optional sleep, which may once have had a role in keeping the animal inconspicuous by ensuring safety before its progression to deep sleep. Entering NREM causes a surge in growth hormone release - but this is unlikely to be used for tissue growth and repair, as the nutrients required will already have been used. He therefore theorises that bodily restoration takes place during hours of relaxed wakefulness in the day, when energy expenditure is low and nutrients are readily available.

Supporting evidence for Horne's theory comes from sleep-deprived participants given cognitive tasks to carry out. They could only maintain reasonable performance through significantly increased effort, suggesting that sleep deprivation causes cognitive impairment because the brain has not had the sleep necessary to maintain prime cognitive function.

Radio DJ Peter Tripp managed to stay awake for just over 8 days (201 hours). During this time he suffered delusions and hallucinations so severe that it was impossible to test his psychological functioning. It is thought that sleep deprivation caused these effects because the brain was unable to restore itself. This supports Horne's theory, as having no REM or deep NREM led to cognitive disturbances, rather than any physical impairment.

Randy Gardner remained awake for 11 days (264 hours), suffering from slurred speech, blurred vision and paranoia. He had fewer symptoms than Tripp despite being awake for longer, and soon managed to adjust back to his usual sleep pattern after the experiment. This again supports Horne's theory - slurred speech, paranoia and blurred vision are likely to result from neurological rather than physical impairment due to lack of core sleep.

Both Tripp and Gardner's studies are case studies, so they lack generalisability to a wider population. The large differences between just these two cases suggest that individual differences play a major role in how a person experiences sleep and how much sleep they need - too large a role for valid general conclusions to be drawn from case studies.

Also, Tripp and Gardner were both male. Research has shown that hormone levels and differences can play a large role in determining how an individual experiences sleep, so, taking into account hormonal differences between genders, it would be beta bias to generalise their results to females.

Finally, a methodological issue in Gardner's study comes from the observation of symptoms like blurred vision. It is difficult to establish whether such a symptom has a psychological or physiological cause: it could be either a result of bodily impairment, such as a malfunction of the optic nerve, or of brain impairment, such as a malfunction of the occipital lobe, the part of the brain responsible for visual processing. This makes it difficult to establish what damage was done by the sleep deprivation - physical and mental, as Oswald would suggest, or purely mental, as Horne would suggest?

Tuesday, 10 November 2015

Lifespan changes in the nature of sleep

This is part of the "nature of sleep" topic, concerning stages of sleep as well as lifespan changes. However, questions have never specifically asked about stages, and there is a lot more to talk about as well as a lot more AO2 for lifespan changes than there is for the sleep cycle. If people want a specific post on stages, I can do that as well, but this is all you should need as far as I know.


Black: AO1 - Description
Blue: AO2 - Evaluation - studies
Red: AO2 - Evaluation - evaluative points/IDAs


Lifespan changes in the nature of sleep



The nature of sleep varies dramatically over the course of the human lifespan, both in terms of the required duration of sleep, and the proportion of the different stages of sleep. The proportion of REM (rapid eye movement) sleep shows an overall decrease throughout the years, whilst the proportion of NREM (non-rapid eye movement) sleep increases.


Floyd et al (2007) reviewed nearly 400 sleep studies, and found that REM sleep decreases by an average of 0.6% a decade. The proportion of REM sleep starts to increase again from about age 70, though this may be due to an overall decline in sleep duration.


Neonates (newborn babies) sleep for over 16 hours a day over several sleep periods. After birth, they display "active sleep" - an immature form of REM sleep showing high brain activity, and "quiet sleep"- an immature form of slow-wave (deep) sleep. The proportion of quiet sleep increases and the proportion of active sleep decreases as they grow from newborns to infants. Newborns in REM sleep are often restless, with arms, legs and facial muscles moving almost constantly. Newborns enter REM sleep immediately, and do not develop the NREM/REM sleep sequence until 3 months of age. Over the first few months, proportion of REM sleep declines rapidly.


Eaton-Evans and Dugdale (1988) found that the number of sleep periods for a baby decreases until about 6 months of age, then increases until 9 months, before slowly decreasing again. This disruption between 6 and 9 months could possibly be a result of teething problems - the pain from teething leading to restless and disrupted sleep, with frequent periods of wake.

Baird et al (2009) found an increased risk of waking between midnight and 6 a.m. in infants between 6 and 12 months old whose mothers had experienced depressive symptoms during or immediately preceding pregnancy. Regular night waking in the first year is associated with later sleep disruption, behavioural problems and learning difficulties.

A real-world application of this research is emphasising the importance of the early establishment of regular sleep patterns in infants, as well as the importance of the mother's mental health during pregnancy. In order to reduce the risk of behavioural problems and learning difficulties later on in childhood, parents should attempt to settle their baby into a regular sleep pattern as soon as possible after the disrupted sleep between 6 and 9 months has passed. Research has also suggested that nurturing a regular sleep cycle can reduce the incidence of SIDS (sudden infant death syndrome).

Puberty marks the onset of adolescence, when sexual and pituitary hormones are released in pulses during slow wave (deep) sleep. Sleep quality and quantity do not change drastically, but external pressures and stress may lead to a less regular sleep cycle. Both sexes may begin to experience erotic dreams.


Crowley et al (2007) explained the change in the sleep patterns of adolescents as a result of changes in hormone levels, described as "delayed sleep phase syndrome" upsetting the circadian clock.

This study has a valuable real-world application in education. Wolfson and Carskadon suggested that schools should begin later in the day to accommodate the poor concentration and attention spans of adolescents earlier in the morning. This change could have a positive effect on learning and productivity, thereby improving exam results and academic achievement.


A shallowing and shortening of sleep may occur in middle age, with increasing levels of fatigue. There is a decrease in the amount of stage 4 (deep) sleep, and it may be more difficult to stay awake. Weight issues may lead to respiratory problems such as snoring that can affect quality of sleep.


Van Cauter et al (2000) examined several sleep studies involving male participants, and found two periods of significant reduction in total amount of sleep: between 16 and 25, and between 35 and 50.


Only male participants were studied, so it is difficult to generalise to females, and it would show beta bias to attempt this generalisation. Other sleep studies have demonstrated the importance of hormones in controlling circadian rhythms and the sleep cycle, suggesting that hormonal differences between men and women lead to different changes in sleep at different life stages. Also, environmental factors that affect the nature of sleep, such as stress, can affect men and women differently, meaning that the results cannot be generalised to both genders.


Conclusion



There is significant evidence to suggest that both the type and quantity of sleep vary tremendously over the course of the human lifespan, with neonates experiencing a different form of sleep to other age groups, and most people undergoing a steady decline in the proportion of REM sleep up until senescence.


Studies in this area use rigorous scientific methodology in their approach to studying sleep, often using electroencephalograph (EEG) machines to measure electrical activity in the brain over the course of a night's sleep. The use of sleep labs and EEGs provides a reliable and objective measurement of brain activity, but their use can undermine a study's validity. When participants sleep in a sleep lab, they are not exposed to external interruptions such as traffic or noisy neighbours that can reduce quality of sleep at home. Additionally, EEG equipment is bulky and uncomfortable to wear, which might also reduce sleep quality. These factors mean that results gathered may not have very high external validity, and may lack real-world generalisability.


Research also suffers from cultural bias, as many of the studies take place in the UK or the USA, and thus are more likely to include American and British participants. Assuming that results obtained are applicable globally reflects cultural bias, and is likely to be incorrect - for example, many Mediterranean countries take "siestas" - daytime naps that split sleep into two blocks rather than one. Cultural practices such as these mean that conclusions drawn from studying American and British participants are unlikely to be cross-culturally applicable.

Monday, 9 November 2015

Infradian Rhythms

This follows on nicely from the circadian rhythms topic - similar construction, use of IDAs, description etc. I'll be focusing on Seasonal Affective Disorder and the menstrual cycle here as my two examples of infradian rhythms. Again, I'm writing in the style of an exam question response, so will only include as much detail as we could be expected to write in half an hour - but if anyone is interested in more studies to use, post a comment or message me!

Black: AO1 - Description
Blue: AO2 - Evaluation - studies
Red: AO2 - Evaluation - evaluative points/IDAs


Control of infradian rhythms - EPs and EZs


Infradian rhythms are biological rhythms with a cycle length of more than 24 hours - such as the menstrual cycle, with a rhythm length of roughly a month, or Seasonal Affective Disorder (SAD), with seasonal fluctuation. SAD is characterised by depression experienced in the winter months, which then disappears during the summer. There are two main endogenous pacemakers that control SAD: the hormone melatonin and the neurotransmitter serotonin. These two chemicals are normally in a balanced equilibrium, but when an increase in melatonin occurs, leading to a fall in serotonin levels, depression can develop. Light is a key exogenous zeitgeber in this cycle. During the winter months, when there is much less sunlight, the pineal gland will produce more melatonin, leading to a fall in serotonin levels that causes depression. This explains the winter onset and summer disappearance of SAD.

Evidence to support the serotonin-melatonin hypothesis of SAD comes from the successful real-world application of phototherapy in its treatment. Daily exposure to artificially heightened light levels using a light box has been shown to help alleviate symptoms of SAD. This, as well as being a valuable application of the theory, supports the idea of low light exposure causing SAD's melatonin rise and serotonin fall.

Further supporting evidence comes from a study by Eastman, who found that in SAD patients, a reduction in symptom severity was much more likely when they were exposed daily to bright morning light rather than dim evening light or a placebo. Again, this supports the concept of a sunlight deficiency being responsible for SAD - exposure to the bright lights triggered the release of serotonin, which would help restore a healthy serotonin/melatonin equilibrium.

However, Murphy measured the serotonin and melatonin of a sample of SAD sufferers and a control group of non-sufferers hourly over a few days, and found no significant differences between the groups for the levels of either chemical. This challenges the serotonin-melatonin hypothesis, suggesting that factors other than chemical levels are present in the development of SAD.

Biological reductionism is a key issue with this theory. During winter, many people have much lower levels of social contact and physical activity - both of these factors could play a role in the winter onset of SAD. It is overly simplistic to suggest that the disorder is purely a result of biological factors, when social factors could play just as important a role in explaining its winter onset.

The menstrual cycle is another infradian rhythm, with an average cycle length of 28 days. It is regulated by the hormones progesterone and oestrogen, produced by the ovaries under the control of the pituitary gland. As the cycle begins, both hormone levels are low during menstruation, but a surge in oestrogen levels triggers ovulation. After this, progesterone levels steadily increase over two weeks to maintain the uterine lining in preparation for a pregnancy. If there is no fertilisation of the egg after two weeks, a drop in both hormone levels triggers menstruation, restarting the cycle.

Research has demonstrated that the pheromones of other women are an exogenous zeitgeber that affect hormone levels, influencing the menstrual cycle. Russell used a single blind trial, applying cotton pads containing traces of pheromones from the "odour donor" to participants, and found that the participants' menstrual cycles synchronised with those of the donor. This supports the concept of pheromones as an EZ that can influence this infradian rhythm.

Through its use of a single-blind trial and a control group to help establish cause and effect, this study uses fairly scientific methodology to gather data. This lends credibility to the results - due to these controls, it has high replicability, meaning it can be repeated with similar results on different sample groups, helping establish and generalise a conclusion.


However, a study by Yang and Schank challenged this conclusion. They studied 186 pairs of Chinese women who lived together, controlling for the methodological errors of earlier studies, and found no synchronisation of cycles beyond pure chance. This suggests that pheromones are not an EZ that affects this biological rhythm, and that the menstrual cycle is controlled only by hormonal EPs.

However, a key issue with this study is that despite a large sample size, it lacks generalisability, and is culturally biased when attempting to apply it. Studying only Chinese women and then attempting to globally apply the results is imposing an etic construct on all cultures - especially as high levels of environmental pollution in China could interfere with pheromone transmission, making Chinese women particularly unrepresentative of the global population. This attempt at global generalisation imposes an etic as it marginalises potential differences between ethnicities and cultures that could mean the results do not apply cross-culturally.  


Conclusion


In conclusion, while there is evidence to support the role of exogenous zeitgebers in infradian rhythm control, both SAD and the menstrual cycle seem to be predominantly regulated by endogenous pacemakers. However, evidence has shown that free will can affect biological and biochemical systems (Born et al, 1999: early wakers by choice have higher ACTH levels in the blood), so suggesting that SAD is entirely a result of uncontrollable biological processes is overly deterministic. Similarly, the true mechanism that controls infradian rhythms is likely to be a result of both biological and environmental factors - it is too reductionist to single out either type of factor as being entirely responsible.

Thursday, 5 November 2015

Schizophrenia - cognitive therapies

One more schizophrenia post after this one - diagnosis, reliability & validity. This post will cover Cognitive Behavioural Therapy and Cognitive Behavioural Family Therapy, two psychological treatments for schizophrenia. There's probably more here than you can expect to write in half an hour, so pick your favourite studies and relevant evaluative points and use those. The only one I would recommend you definitely use would be Falloon et al (1985), as it is some strong supporting evidence for the effectiveness of CBFT compared to CBT.

Black: AO1 - Description
Blue: AO2 - Evaluation - studies
Red: AO2 - Evaluation - evaluative points

Cognitive Behavioural Therapy


CBT is not a "cure" for schizophrenia, as the cognitive distortions and disorganised thinking associated with schizophrenia are a result of biological processes that will not right themselves when the correct interpretation of reality is explained to the patient. The patient is not in control of their thought processes. The goal of CBT is to help the patient use information from the world to make adaptive coping decisions - improving their ability to manage problems, to function independently and to be free of extreme distress and other psychological symptoms. CBT teaches them the social skills that they never learned, as well as how to learn from experience and better assess cause and effect. Skills taught often address negative symptoms, such as alogia, social withdrawal and avolition, and can include social communication skills, the importance of taking antipsychotics routinely, and managing paranoia and delusions of persecution by challenging the evidence for these irrational beliefs.

Cognitive Behavioural Family Therapy (CBFT) is designed to delay relapse by helping the family of the schizophrenic to support the patient, by methods such as stress management training, relaxation techniques, communication and social skills, emphasis on the importance of antipsychotic drugs, and assessment of expressed emotion. High levels of expressed emotion on scales of hostility, emotional over-involvement and critical comments have been linked to rehospitalisation, so CBFT uses cognitive and behavioural methods to lower the emotional intensity of the patient’s home life. It has two general goals: To educate family members about schizophrenia, and to restructure family relationships to facilitate a healthier emotional environment.

Laing suggested the most important factor in the progression of schizophrenia is the family and how they treat the patient. A study by Brown (1972) supports this - he studied family communication patterns in schizophrenics returning home after hospitalisation.  Results showed that communication was a critical variable in whether patients would relapse into a psychotic state – patients returning to homes with a high level of expressed emotion were much more likely to relapse than those returning to homes with a low level. This supports the role of expressed emotion in determining long-term outcomes for schizophrenics.

Vaughn + Leff (1976) studied 128 schizophrenics discharged from hospital and returned to their families. Communication patterns between family members were rated for EE. The crucial finding was that families showing high levels of negative expressed emotion (hostility, over-involvement, criticism) were more likely to have their patient relapse than families showing low levels of negative EE. Relatives with high levels of negative EE responded fearfully to the patient, characterised by a lack of insight into and understanding of the condition.

Leff + Vaughn (1985) found that a high level of positive EE with communication patterns showing warmth and positive comments is associated with prevention of relapse. They concluded that not all expressed emotion is detrimental to the relapse prospects of the patient. 

Sarason + Sarason (1998) summarised key findings from research into EE and schizophrenia: 

  • Rates of EE in a family may change over time – during periods of lower symptom severity, rates of negative EE drop and vice versa. High rates of EE may only reflect periods of high symptom severity, and not be an overall reflection of the family dynamic.
  • Cultural factors may play a role in EE. The association between high EE rates and relapse has been replicated in many cultures, but cultural factors may influence rate of EE and the way it is communicated. Cross-cultural studies have shown that Indian and Mexican-American families show lower levels of negative EE than Anglo-American families. 
  • EE is not limited to families. The association between EE and relapse has been demonstrated with patients living in community care - the significant factor could be communication patterns between patient and those they live with, rather than with family. 

Falloon et al (1985) found a markedly lower relapse rate among schizophrenic patients receiving CBFT than in those just receiving individual CBT. FT sessions took place in the patients’ homes, with family and patient participating. Importance of medication was emphasised, and the family was instructed in ways in which to express both positive and negative emotions in a constructive, empathetic manner. Symptoms were explained to the family, and both family and patient were instructed in adaptive coping mechanisms. Large differences in effectiveness were found - 50% of those in the individual-therapy group returned to hospital over the course of the study compared with only 11% of those in the family-therapy group. 

However, investigators were mindful that patients undergoing CBFT may have improved more than the controls due to taking their medication more routinely – family therapy subjects complied better with their medication regimens.

However, these results were challenged by a study by the University of Hertfordshire, carrying out a meta-analysis of over 50 studies on the use of CBT from around the world. They only found a small therapeutic effect on schizophrenic symptoms such as delusions and hallucinations, which disappeared when blind studies were used, suggesting that CBT has a negligible effect in treating schizophrenia, if any at all.

Jauhar et al (2014) conducted a systematic review and meta-analysis of the effectiveness of CBT for schizophrenia, examining potential sources of bias. They found that overall, CBT has a therapeutic effect on schizophrenic symptoms in the "small" range, and that this effect reduces further when sources of bias are controlled for.


Sarason and Sarason suggested two ways in which findings have been misinterpreted. “Expressed emotion” has been misinterpreted to mean that the expression of any emotion is harmful, rather than just negative emotions – in fact, the expression of warmth and positive emotions can play a role in reducing relapse rates.


Secondly, some family members feel guilty for their emotional expression – it is important to emphasise to them that it does not play a direct causal role, but rather, is a factor that may influence relapse. Allowing them to feel guilt can actually increase the family's levels of negative expressed emotion, harming the patient's long-term prospects.

Global cultural differences mean that in some cultures, high levels of expressed emotion are not a social norm. Results from research suggest that cultures with lower levels of expressed emotion should have lower schizophrenia rates, but they do not, challenging the role of EE in the development and maintenance of schizophrenia.

There are ethical issues with comparing a therapy group with a control. If the therapy group improves and the control group doesn't, the researcher has knowingly denied the control group an effective method of treatment.

Schizophrenics do tend to lack social skills, and many of the negative symptoms (disorganised speech, speaking very little, lack of emotion) do help isolate them socially, so therapy that emphasises social functioning is appropriate and important in helping treat this specific aspect of the disorder.

Research into CBFT has not shown a causal role for the family in schizophrenia development, as Bateson once thought, but has shown that the family can be a powerful factor in determining the patient’s risk of relapsing to a psychotic state. CBFT used to restructure family relationships and reduce levels of negative EE is a crucial tool in both increasing the patient’s quality of life, and helping the family detect early warning signs of a relapse.

Studies that compare effectiveness between different therapies often do not measure outcomes in the same way - some look at attrition rates, some look at symptom severity before and after, and some look at relapse rates. Also, the "hello-goodbye effect" refers to the bias caused by patients who tend to exaggerate their symptoms before therapy, and exaggerate their progress after therapy, leading to inaccurate conclusions being drawn. This makes holistic comparisons of effectiveness between therapies difficult, and results must be handled with care.

CBT/CBFT does significantly improve functioning and reduce the suffering of schizophrenic patients, but it is not a cure, and is unlikely to work on its own. When used in conjunction with appropriate drugs, it tackles symptoms such as poor social skills, alogia and avolition, but is often unsuccessful at treating the more serious positive symptoms. Even when social function is the focus of therapy, it is not possible to increase social functioning to the level of non-schizophrenics.


Wednesday, 4 November 2015

Circadian Rhythms

Thought I'd try the circadian rhythms topic from the other half of the course next. There are still two schizophrenia topics left to write up - psychological therapies, and diagnosis, classification & validity - these should be finished sometime in the next fortnight. IDAs (issues, debates and approaches) are much more important on this side of the course, I'll try and clearly signpost them as I go. I will try and write this in the style of a response to an exam question into this topic, but if people are interested, I can write an additional post including more studies than necessary so you can pick and choose your favourites.

Black: AO1 - Description
Blue: AO2 - Evaluation - studies
Red: AO2 - Evaluation - evaluative points/IDAs


Control of circadian rhythms - EPs and EZs


Circadian rhythms are biological rhythms with a cycle length of roughly 24 hours. They are controlled by both external factors (exogenous zeitgebers, or EZs) and internal factors (endogenous pacemakers, or EPs). The two main EZs that affect the sleep/wake cycle, the best example of a circadian rhythm, are light (either natural or artificial) and social cues such as the sleep/wake cycles and behaviour of those around us. The two main EPs that affect the sleep/wake cycle are the SCN (suprachiasmatic nucleus) and the pineal gland. The SCN is located in the hypothalamus, just above the point where the optic nerves cross, and detects changes in light levels. When a decrease is detected, it stimulates the pineal gland, also in the brain, to release melatonin, a hormone that induces tiredness. When an increase is detected, the SCN stimulates the pineal gland to cease melatonin release, which helps wake us up when the sun rises in the morning. Thus, the hormone melatonin helps regulate the sleep/wake cycle.

Stephan and Zucker (1972) provide supporting evidence for the role of the SCN in controlling circadian rhythms. They compared a control group of healthy rats to a group of rats that had undergone surgery to destroy their suprachiasmatic nuclei. Whilst the rats had previously maintained constant and reliable patterns of eating and movement, these patterns disappeared in the group whose SCNs had been destroyed - being replaced with unpredictable and patternless behaviour, the rats unable to maintain a steady circadian rhythm due to the brain damage. Even though the rats were kept in the exact same environment, those with damaged SCNs lost their circadian rhythms, demonstrating that the SCN is a crucial EP in the control of circadian rhythms.

In some respects, this study was highly scientific, taking place in a controlled laboratory environment and using a control group to help establish cause and effect. This would normally mean that the study is highly generalisable, but despite its scientific replicability, care must be taken when generalising to humans, due to the nature of the sample. There are likely to be large physiological and neuroanatomical differences between the mechanisms that control the sleep/wake cycle in rats and in humans, and it would be over-extrapolation to confidently apply results from this study to humans.

Another important issue is raised by the use of non-human animals in this study, and that is one of ethics. The level of harm the rats were subjected to was far from insignificant - 14 of the 25 rats died during surgery or as a result of complications, meaning the study is ethically questionable, even when factoring in the (small) scientific and social benefits of the results. Also, the severity of the surgery calls the experiment's validity into question - as over half of the sample died from it, it is highly possible that the surgery affected the rats' brains in some way unrelated to the SCN that could explain the elimination of circadian rhythms.

Further supporting evidence for the role of EPs in the control of circadian rhythms comes from Morgan (1995), who removed the SCN from hamsters. The hamsters without an SCN had their circadian rhythms eliminated, but the rhythms were restored when the SCN cells were transplanted back in. This study clearly demonstrates the role of the SCN as an EP in circadian rhythm control.

Generalisability to humans may be an issue, as there are definite physiological differences between the brains of humans and hamsters. Additionally, humans do not live in isolation as the hamsters did in this study, so other factors proven to have an influence on circadian control such as social cues affect the human sleep/wake cycle in ways that this study ignored.

Again, the validity of this study may be called into question, as the major operation may have contributed to the change in behaviour, weakening the supporting evidence for the role of EPs in controlling circadian rhythms.

However, a 1975 study by Siffre challenged the importance of EPs in the control of circadian rhythms. Siffre spent 179 days underground in a cave with no natural light or social cues to act as EZs, only artificial light on demand. During his stay, he recorded body temperature (another circadian rhythm), heart rate, blood pressure and sleep/wake pattern. His sleep/wake cycle extended from 24 hours to between 25 and 32 hours despite a fully functional SCN and pineal gland - his days became "longer." On the 179th day, by his count it was only the 151st day. This study demonstrates the importance of EZs such as light and social cues in the control of circadian rhythms, showing that EPs such as the SCN alone are not enough - EZs are required as well in order to keep the sleep/wake cycle at a steady 24 hours. However, it does demonstrate that EPs can provide some regulation to circadian rhythms without light or social cues, as once the rhythm had extended to 25-32 hours, it remained fairly constant throughout.

Being a case study, an issue with Siffre's research is that it lacks generalisability, only studying one man. This means that results are likely to be unrepresentative of the general population, and care should be taken when applying them as such. Only studying a male is also problematic, as hormonal or neurological sex differences could mean that the mechanisms that control circadian rhythms in males and females function slightly differently - applying Siffre's results to women would show beta bias, minimising real differences between the sexes.

Luce and Segal studied circadian rhythms in the indigenous population of the Arctic Circle, where it is light all through the summer months and dark all through the winter months, meaning light cannot function properly as an EZ. Despite this, people who live there maintain a steady sleep/wake cycle, sleeping roughly 7 hours per night, due to the function of social cues as an EZ. This study supports the role of social cues as an EZ, but suggests that light is not as crucial as otherwise thought.

Cultural or ethnic differences in the indigenous people of the Arctic Circle could mean that these results cannot be accurately generalised to the global population, and it would be imposing an etic to attempt this. Having grown up and lived all their lives accustomed to this annual pattern of light, it is possible that they have developed biological or social mechanisms that allow them to function normally using only social cues as an EZ rather than light, and we shouldn't assume that the sleep/wake cycle of non-natives would regulate itself in the same way if exposed to the same patterns.


Conclusion


In conclusion, it seems that both exogenous zeitgebers and endogenous pacemakers are required for the healthy regulation of circadian rhythms - it is overly reductionist to study one in isolation, or to claim that EPs such as the SCN and pineal gland alone are responsible for controlling such complicated processes as the sleep/wake cycle and body temperature. This theory ignores cognitive and social factors that play a role in the control of human sleep behaviour. Humans are able to make appraisals of situations and of the behaviour of others, and these factors play an important role in the control of circadian rhythms. Sleep is not purely a biological behaviour, and other approaches should not be completely ignored.


Additionally, claiming that human sleep behaviour is only a result of EPs and EZs is too deterministic, as humans generally have the free will to decide when we want to sleep or choose to partake in activities that will influence our sleep/wake cycle, thus altering the rhythm ourselves. This is demonstrated by everyone who alters their own sleep/wake cycles in order to participate in shift work. Born et al found higher ACTH levels in the blood of people who made themselves get up earlier, suggesting that even biological processes and factors can be influenced by free will.

Monday, 2 November 2015

Schizophrenia - the psychological explanation

I think this topic will follow on well from the biological approach to schizophrenia - it's a little more complex, bringing together the social, psychodynamic and cognitive approaches. Psychological treatments should follow on from this shortly - CBT and CBFT should be coming up soon. In the exam, it's highly unlikely that a question will ask about a specific explanation - questions into this area are much more likely to be "discuss 1/2/2+ psychological explanations for schizophrenia" rather than "discuss the psychodynamic/cognitive/social approach to schizophrenia."

Black: AO1 - Description
Blue: AO2 - Evaluation - studies
Red: AO2 - Evaluation - evaluative points


Psychodynamic explanations of schizophrenia


The psychodynamic approach rose to popularity in the mid-20th century as an environmental explanation, emphasising the causal role of the family in the development of schizophrenia. It explains schizophrenia as a regression to the Id-dominated oral stage, with little awareness of the outside world. "Primary narcissism" develops - delusions arise from the child feeling threatened and persecuted by the outside world, but also feeling omnipotent over their internal world.

The schizophrenogenic mother


Fromm-Reichman and Kasanin were key psychologists in pioneering the concept of a "schizophrenogenic" mother, who was not schizophrenic herself, but would cause the development of schizophrenia in her children due to her treatment of them. The two central traits of the schizophrenogenic mother are being domineering (maternal overprotection) yet cold and uninvested (maternal rejection).

Kasanin (1934) studied the parents of 45 schizophrenics and found maternal rejection in 2 patients, and maternal overprotection in 33. These results suggest that overprotection is the most significant quality of the archetypal schizophrenogenic mother, supporting the hypothesis of certain maternal behaviours inducing the development of schizophrenia in their children. 

Kasanin gathered data through interviews and case-report studies, which are prone to methodological problems such as researcher bias and subjectivity that reduce internal and external validity. The case reports were retrospective, so details may have been recalled incorrectly, biased towards reporting maternal overprotection due to leading questions, or at risk of containing false information due to social desirability bias. Also, the patients' significant mental disturbances, possible substance abuse, and chronic use of antipsychotic medications all contribute towards a mental state in which information from early life may not be recalled correctly.

Schofield and Balian also studied the early lives of schizophrenic patients. The only significant difference they found between schizophrenics and non-schizophrenics was in the quality of maternal relationships - schizophrenics were more likely to have had less affectionate mothers.

Schofield and Balian's study was a retrospective, in-depth interview on the profoundly mentally ill, many of whom had histories of substance abuse and chronic use of antipsychotic medications, meaning that information was likely to be unreliable.

Mishler (1968) carried out an observation of mothers with schizophrenic children and found the mothers to be aloof, unresponsive and emotionally distant - but only towards their schizophrenic children, behaving normally towards their non-schizophrenic children. This raises an important issue of cause and effect - the coldness and distance attributed to the schizophrenogenic mother may be her response to psychological disturbances in the child.

Parker criticised this theory by suggesting that there is no archetypal schizophrenogenic mother - there is a parental type distinguished by hostility, criticism and intrusiveness, but this type is not particularly overrepresented among the parents of schizophrenics.

Hinsie and Campbell criticised this hypothesis for ignoring the fact that many mothers fulfil Kasanin's criteria for schizophrenogenesis yet very few of them have children who develop schizophrenia, and not all schizophrenics have the archetypal schizophrenogenic mother, so it is reductionist to suggest that the disorder is only a result of maternal relationships. The hypothesis is also reductionist in its failure to take evidence for a biological basis into account, such as the evidence for a significant genetic component.

Marital schism, marital skew and double bind


Lidz proposed the concepts of "marital schism" and "marital skew" as traits found in the relationship between the parents of a schizophrenic. Marital schism refers to the open hostility and criticism that occurs when the parents are unable to adopt role reciprocity (the ability to understand each other's goals, roles and motivations). Marital skew refers to the tendency of one parent to dominate interaction - usually an intrusive and domineering mother and a distant, passive father.

A problem with this theory is the inability to isolate the direction of the cause and effect relationship between marital skew and schizophrenia. The early symptoms of schizophrenia in childhood and the resultant psychological vulnerability in the child could cause one parent to be more involved than the other.

However, Lidz did find supporting evidence for the role of the family in the development of schizophrenia - he found that 90% of schizophrenics have a family background that is disturbed in some way, while 60% have at least one parent who suffers from a serious personality disorder.

Bateson proposed that "double bind" scenarios are responsible for the formation of schizophrenia - receiving conflicting emotional messages from the parents in early childhood (for example, emotional warmth one day, withdrawal and hostility the next) leads the child to lose their grip on reality and to see their own feelings as unreliable, contributing towards the development of the disorder.

Kennedy analysed letters sent between schizophrenics and their parents, and compared the results to a control group. The results showed evidence of double bind scenarios, but, echoing Parker's criticism of the schizophrenogenic mother hypothesis, these scenarios were not particularly over-represented among schizophrenic patients - they were also observed in the letters between non-schizophrenics and their parents. He concluded that the majority of people receive mixed messages, but most do not develop schizophrenia, so double bind theory cannot exclusively explain the illness.

Overall evaluation of psychodynamic explanations


The causal role of the family lacks reliable and objective empirical evidence, and the relationships established so far are only correlational.

However, much research suggests an unstable family background may increase the risk of schizophrenia in children who already have a biological predisposition - Sorri et al's study of Finnish children born to schizophrenic mothers and adopted into other families found that the quality of adoptive parenting was the most important factor in determining whether the children grew up to develop schizophrenia. Wahlberg (2000) examined earlier data and concluded that environmental factors such as family communication can strongly affect the chance of schizophrenia developing in children with a genetic predisposition to the disorder.

Neil suggested that political and cultural conditions post-WWII influenced psychodynamic theories of schizophrenia, as psychologists such as Bowlby placed greater emphasis on the role of the mother in the early development of the child. It is possible that this sudden focus on maternal roles led to the scientific popularity of the schizophrenogenic mother hypothesis, rather than the theory having any particular credibility.

Cause and effect is difficult to determine when carrying out research in this area - early schizophrenic symptoms in a child can put significant levels of stress upon a family, causing potential instability and disturbance.


Cognitive explanations of schizophrenia


The cognitive approach seeks to explain schizophrenia as the result of faulty information processing. Frith explains it as a result of faulty "metarepresentation" - the cognitive process that allows us to reflect on our own thoughts and behaviour, to generate thoughts, ideas and intentions, and to reflect on the thoughts and behaviour of others.

Metarepresentation takes place through the action of two systems - the "supervisory attention system", which is responsible for self-generated actions, and the "central monitoring system", which is responsible for recognising our thoughts as our own and external voices as belonging to others. Problems with the supervisory attention system lead to negative symptoms such as alogia, catatonia and apathy, while problems with the central monitoring system lead to positive symptoms such as hallucinations, delusions and thought disturbances.

Frith (1980) carried out a card guessing game with groups of both schizophrenics and non-schizophrenics, guessing whether a drawn card would be black or red. Non-schizophrenics made logical choices, taking into account probabilities and cards already drawn. Schizophrenics made very rigid decisions, finding it difficult to take probabilities and self-generated cognitive strategies into account.

Frith and Done (1986) carried out a verbal fluency assessment of schizophrenics and non-schizophrenics, where they were given a category and asked to generate lists. Schizophrenics performed very poorly in this task compared to the control group. In a similar visual fluency task involving categorisation, they performed equally poorly.

Bentall (1991) had schizophrenics either generate words for a list, or read off the list. A week later, they were read the words used in the initial test, and asked if they'd generated them or read them a week ago. Compared to a control group, schizophrenics performed very poorly.

A large body of empirical evidence supports the cognitive explanation - the above studies point to definite cognitive impairments in schizophrenics, such as an inability to recognise self-generated thoughts. However, the patients' history of strong antipsychotic medication could account for the cognitive impairment seen in these tests.

The cognitive explanation manages to explain both the positive and negative symptoms of schizophrenia, as opposed to the dopamine hypothesis, which only manages to explain the positive symptoms. However, it only explains the symptoms of schizophrenia, not the causes - what causes the metarepresentative faults in the first place?

Hemsley (1993) explained schizophrenic symptoms as a result of an inability to activate schemas. Schemas develop in early childhood as a way to categorise and process information from the outside world - if these fail to activate correctly, self-generated sensory information could be interpreted as external, causing an auditory hallucination.

This theory manages to explain the origins of auditory hallucinations, but has little empirical evidence to support it.


Social explanations of schizophrenia


Social causation theory seeks to explain the overrepresentation of schizophrenics in poor and deprived urban populations as a result of factors such as poor education, diet, healthcare, access to drugs, unemployment, overcrowding and stress.

Social drift theory suggests that schizophrenia development is not a consequence of deprivation but a cause of it - schizophrenic symptoms lead to economic and social hardship, as sufferers struggle to hold down jobs, mortgages and relationships, which leads to moving into poor urban areas.

Castle (1993) studied Camberwell, a deprived area of London, and found that the majority of schizophrenics were born locally, as opposed to having moved there after developing schizophrenia, supporting social causation theory and challenging social drift.

However, potential effects of drift cannot be ruled out - it is possible that the schizophrenics born there also had schizophrenic parents who moved there due to socioeconomic hardship - and evidence suggests a definite genetic basis to schizophrenia.

Overcrowding is a common issue affecting wellbeing in deprived areas, leading to greater exposure to viruses - if the viral explanation is true, that could explain how socioeconomic hardship increases the risk of schizophrenia development. Malnutrition is also a problem - poverty leads to malnutrition, which leads to illness and possibly abnormalities in the development of brain anatomy, another theorised explanation of schizophrenia.


Sunday, 1 November 2015

Schizophrenia - drug therapies

This topic follows along nicely from the last one, and again, is relatively straightforward. AO1 is fairly simple here, and AO2 evaluation works with reference to effectiveness (using research evidence) and appropriateness (using more specific IDA-style evaluative points). As with all my posts, if I come across any useful new information at any point, I will update this.

Black: AO1 - Description
Blue: AO2 - Evaluation - studies
Red: AO2 - Evaluation - evaluative points


Typical antipsychotics


First developed in the 1950s, this class includes chlorpromazine, haloperidol and fluphenazine. They act as dopamine antagonists, reducing dopamine activity by blocking D2 receptors in the brain's dopamine pathways. Generally, they are administered either as a course of tablets, or as a "depot injection", a single injection every 2-4 weeks which releases the medication slowly over time. Both classes of antipsychotics work on the dopamine hypothesis - the idea that abnormally high dopamine activity, due to oversensitive receptors, causes schizophrenia - and, because of this, they only really function to treat the positive symptoms such as hallucinations, delusions and thought disturbances.

There are several potentially harmful side effects that can result from the use of typical antipsychotics. Side effects vary between specific chemicals, but are likely to include muscle stiffness, cramp, tremors, and extrapyramidal symptoms (drug-induced movement disorders) such as muscle spasms, rigidity, involuntary muscle movement and restlessness.

Chronic use of typical antipsychotics carries the risk of causing tardive dyskinesia, a serious motor disorder characterised by repetitive and involuntary muscle movements such as excessive blinking, grimacing and limb twitches.


Atypical antipsychotics


Atypical antipsychotics such as clozapine, risperidone and olanzapine also act as dopamine antagonists, interfering with post-synaptic D2 dopamine receptors to prevent the transmission of dopamine across the synapse. They can also reduce serotonin activity by blocking serotonin receptors.

Compared to the most common typical antipsychotics such as haloperidol, they are less likely to cause extrapyramidal motor impairment, as well as the development of the serious motor disorder tardive dyskinesia. 

However, they have been found to carry an increased risk of stroke, blood clots and sudden cardiac arrest. They can also cause agranulocytosis – low white blood cell count, resulting in immune weakness and increased susceptibility to infection.

Compared to many antipsychotics, atypicals carry a very low risk of withdrawal symptoms – possibly on account of the drugs being absorbed into adipose tissue and slowly released. 

Effectiveness of drug therapies


Stargardt et al (2008) – investigated the cost effectiveness of typical and atypical antipsychotics by comparing drug costs to rehospitalisation rates after a course of treatment. Studying 321 patients, they found no statistically significant differences in the effectiveness of either class – atypical antipsychotics were often more useful in the most severe cases, whereas typical antipsychotics were often more useful in less severe cases. Atypical antipsychotics also have a much smaller risk of harmful side effects – but they are also more expensive. They concluded that in terms of efficiency, balancing cost against effectiveness, there is no great difference between the two classes of antipsychotic.

Correll + Schenk (2008) – 28,000 participants stratified by age across 12 trials, assessing the incidence of tardive dyskinesia in patients treated with typical or atypical antipsychotics. Across all the trials, TD risk was 3.9% for atypicals and 5.5% for typicals, with significant variation in incidence between trials. Four adult trials even suggest that, compared to unmedicated schizophrenics, atypical antipsychotics actually reduce the risk of TD development: 13.1% incidence for atypicals, 15.6% for unmedicated patients, 32.4% for typicals. Overall, the results suggest a lower risk of tardive dyskinesia in patients medicated with atypical antipsychotics than in those medicated with typical antipsychotics.

Bagnall et al (2003) – meta-analysis of studies into attrition across 2,000 schizophrenic participants over 2 years of antipsychotic medication. A higher attrition rate was assumed to indicate lower satisfaction with the treatment, resulting from more side effects and less effective relief of symptoms. Generally, fewer participants left trials early from atypical drug groups than from typical drug groups, suggesting that patients found atypical antipsychotics more acceptable than their typical counterparts – even if they are no more effective in relieving symptoms, they have fewer negative side effects.

Davis (1980) – analysed the results of 29 studies (3,519 people) and found that relapse occurred in 55% of patients whose drugs were replaced by a placebo, compared to 19% of those who remained on the real drug. This suggests significant drug effectiveness – however, Ross + Read (2004) point out that 45% of patients given a placebo did not relapse, which challenges the actual effectiveness of drug treatments.

Overall evaluation of drug therapies


Antipsychotic drugs are a relatively cheap, effective treatment, reducing symptoms in the majority (roughly 70%) of patients and allowing them to live comparatively normal lives.

However, like all drug therapies for mental illness, they treat symptoms rather than underlying causes – so relapse is likely when the course of drugs is discontinued. This can lead to “revolving door syndrome”, where patients are readmitted and given a new course of antipsychotics very soon after completing a prescribed course of treatment – remaining on a cycle of rehospitalisation and medication for potentially many years.

Antipsychotics are generally better at treating positive symptoms such as hallucinations and delusions rather than negative symptoms such as apathy and alogia, suggesting that factors other than an excess of dopamine may be responsible for negative symptoms.

Ethical issues can result not only from the often severe side-effects, but also from the use of drugs as a “chemical straitjacket.” There are potential ethical concerns with compulsory medication for schizophrenics – forcing patients to take drugs against their will as a form of social control.

Schizophrenia - the biological explanation

Thought I'd start with this topic as it's quite straightforward. I probably won't be posting these in any particular order - diagnosis and classification, treatments of schizophrenia etc. will all follow in due time. As with all my posts, if I come across any useful new information at any point, I will update this.

Black: AO1 - Description
Blue: AO2 - Evaluation - studies
Red: AO2 - Evaluation - evaluative points


The dopamine hypothesis


The hypothesis that underpins most biological explanations and drug therapies for schizophrenia, this suggests schizophrenia is simply a result of heightened levels of the neurotransmitter dopamine, caused by oversensitive D2 dopamine receptors.

This can be measured by:

  • MRI scans measuring the activity and density of post-synaptic dopamine receptors, and dopamine levels.
  • Comparison of dopamine levels between schizophrenics and a control group.
  • Studies where schizophrenics are given dopamine agonists, resulting in an increase in symptom severity.

Amphetamines


Amphetamines are a class of CNS stimulant that act as agonists (increasing activity) for the neurotransmitters adrenaline and dopamine. Addicts can develop amphetamine psychosis, producing hallucinations and delusions similar to those that result from schizophrenia, suggesting that heightened dopamine levels may explain some of the positive symptoms.

Homovanillic acid, produced as the body metabolises (breaks down) dopamine, is found in increased concentrations in the urine of schizophrenics. This supports the dopamine hypothesis - increased dopamine levels in schizophrenics, but correlation does not mean causation - another biological mechanism responsible for schizophrenia could also lead to these increased levels as a secondary effect.

Brain scans have found a greater density of dopamine receptors in schizophrenics - suggesting that the condition may well be a result of greater dopamine sensitivity. However, the patients studied already had schizophrenia, so cause and effect cannot be established - it may be the case that increased D2 density might be a response to either the condition itself, or the dopamine antagonists commonly prescribed as antipsychotic medication.

Some studies into the dopamine hypothesis have shown that schizophrenics who take amphetamines show increased symptom severity, but non-schizophrenics given the same dose showed no symptoms of schizophrenia, suggesting that schizophrenics have a higher sensitivity towards dopamine, rather than objectively higher levels.


The relative success of dopamine antagonists (which reduce activity) as a treatment supports the dopamine hypothesis; however, 1 in 3 schizophrenics do not respond to antagonists, suggesting that other factors must be involved.

This suggests that the dopamine hypothesis is overly reductionist - factors other than neurotransmitter levels must play a role in the development of such a complex condition, as the hypothesis cannot explain all of the symptoms. 

Amphetamine psychosis only explains the positive symptoms of schizophrenia - excess dopamine levels in addicts can lead to the positive symptoms, but not the negative symptoms. Dopamine antagonists are also only useful for the treatment of these positive symptoms, suggesting that the dopamine hypothesis can only really explain the symptoms such as hallucinations, delusions and thought disturbances.

Genes


There is evidence to suggest that genes can play a causal role in the development of schizophrenia, as concordance rates are higher between more genetically similar individuals. Several concordance studies support this.

Gottesman carried out a meta-analysis of 40 European studies into schizophrenia which looked at the condition's incidence rates.
  • General public: 1%
  • Sibling of schizophrenic: 10%
  • Son or daughter of schizophrenic: 10%
  • Dizygotic (non-identical) twin of schizophrenic: 17%
  • Monozygotic (identical) twin of schizophrenic: 48%
These results would seem to support the concept of a genetic basis for schizophrenia, but not a purely causal relationship.

More genetically similar individuals have more environmental similarities - upbringing, how they are treated. These environmental similarities could help explain the greater incidence rates.

A methodological flaw emerges in the use of a meta-analysis: the different studies analysed used different research methods and diagnostic classifications, lowering the validity of the research. Additionally, there were large variations in concordance rates between studies.

The results suggest schizophrenia is not completely genetic, or else monozygotic twins would have a 100% concordance rate due to being genetically identical. Therefore, these results support the diathesis-stress hypothesis - the idea that schizophrenia is a result of a genetic predisposition requiring an environmental trigger to cause development of the disorder.

Heston (1966) studied adoptees, seeking to separate the effects of genetics and upbringing in establishing a concordance rate. He studied 47 adopted children who were born to a schizophrenic mother and then adopted by non-schizophrenics. A control group was used to counter the possibility that the stress of adoption itself contributed to the development of schizophrenia.
  • 5/47 of the children developed schizophrenia, just over 10%, the same as the rate of incidence in children of schizophrenics found in Gottesman's study. 
This supports the concept of a definite role of genetics in the development of schizophrenia.

Even though they were adopted at birth, the children still spent 9 months in their mothers' womb during gestation - they could have been exposed to schizophrenogenic drugs, so a shared environment cannot be completely ruled out.

Heston didn't investigate the children until adulthood, so all sorts of environmental factors in childhood and adolescence could have played a role in schizophrenia development.

Sorri studied Finnish adoptees with schizophrenic biological mothers. He found that the chance of schizophrenia developing depended on the quality of adoptive parenting - these children still had a greater risk of developing schizophrenia, but the risk was upbringing-dependent. This further supports the diathesis-stress hypothesis: that schizophrenia has a genetic basis requiring environmental activation.

Gottesman and Shields compared concordance in a meta-analysis of 5 studies on severe schizophrenics, and found a concordance rate of between 75% and 91% - strongly supporting a genetic basis to at least the most severe cases of schizophrenia.

Overall, the studies into the genetic hypothesis suggest that genes can provide a biological basis, making an individual more predisposed to schizophrenia, with certain environmental triggers then required to cause the condition's full development.

Brain anatomy


There is evidence to suggest a smaller brain size in schizophrenics - enlarged ventricles in the brain lead to an overall reduction in the volume of brain matter. The first evidence came from early autopsies of schizophrenics - meaning cause and effect could not be established, as it could be that schizophrenia caused the anatomical differences, not vice versa. Also, these patients had had a long history of antipsychotic medication, which could have affected anatomy, along with potential physical trauma or substance abuse.

Brain scans are a more contemporary method of studying neuroanatomy. Early CAT scans showed that 25% of schizophrenics had enlarged ventricles, compared to a minuscule proportion of healthy controls. However, 25% is very inconclusive - it means that 75% had no anatomical differences from healthy individuals.

Crowe et al (1989) used magnetic resonance imaging (MRI) to study schizophrenics, and found a reduction in brain matter in the hippocampus in the temporal lobe, especially on the left-hand side of the brain.

Goldstein et al (1999) used MRI to find a reduction in brain matter volume in the paralimbic cortex, a group of brain structures involved in emotion processing, goal setting, motivation and self-control - all functions that the symptoms of schizophrenia impair in some way.

MRI technology is not specific enough to reliably pinpoint particular parts of the brain as dysfunctional. Functional scans show blood flow to regions, which, while roughly correlated with neural activity, is not a direct measure of it.

Enlarged ventricles are found to be a predisposing factor for many disorders, and are more likely to be an indicator of general susceptibility to psychiatric disorders than schizophrenia specifically.

The viral explanation


Research suggests that if the mother contracts the flu virus (influenza A) during the 2nd trimester of pregnancy, the child is significantly more likely to develop schizophrenia. 

O'Callaghan et al (1991) looked at the 1957 influenza outbreak, and found that children in the 4th-6th month of gestation during the outbreak had a particularly high incidence of schizophrenia.

Sham et al (1992) examined the relationship between flu outbreaks and reported schizophrenia incidence across several decades. They concluded that schizophrenia was more common amongst those who had been in the womb during the outbreaks. However, the majority of schizophrenics had not been exposed to influenza A in utero, so this hypothesis alone is not a complete explanation of schizophrenia.

Correlations cannot show cause and effect - there may be a third, intervening variable that explains the correlation between antenatal influenza A exposure and schizophrenia development.

Overall evaluation of the biological explanation of schizophrenia


While biological factors such as dopamine levels, genes, neuroanatomy and viral exposure cannot fully explain the development of schizophrenia, and are reductionist in their simplification of the disorder to merely the result of certain biological processes, there is evidence to suggest that biology can provide a predisposition to schizophrenia, which then requires environmental circumstances to trigger the disorder's development. The diathesis-stress model is the model of schizophrenia most likely to be correct: nature provides a basis for schizophrenia, which then requires environmental activation to lead to the full development of the illness.