DE300: Investigating Psychology 3: Developmental psychology: cognitive development and epistemologies
Chapter 5 Developmental psychology: cognitive development and epistemologies runs to around 40 pages and is one of the chapters for TMA3. It will have a fairly familiar feel for anyone who has done one of the child development modules.
Introduction gives a very brief overview of how diverse the fields of study are in developmental psychology and equally how diverse the methods used are.
Piagetian and Vygotskian perspectives on cognitive development. A Piagetian perspective introduces Piaget’s constructivist approach, built on the idea that children construct their understanding of the world by developing schemas, and that this takes place over a number of stages in their lives. So, we have from 1-4 months primary circular reactions (i.e. repetitive motions centred on themselves), from 4-8 months secondary circular reactions (i.e. repetitive motions with effects away from their body) and from 12-18 months tertiary circular reactions, which are experimental in nature. His theory is based on the idea that all children in all cultures will progress through a series of stages in their lives: birth to 2 years sensorimotor (developing object permanence), 2-7 years preoperational (use of language to represent objects), 7-11 years concrete operational (logical reasoning, mastering conservation), 11-18 years formal operational (logical thought applied to abstract ideas). This implies that play is an important element in development, although the relationship between play and guided play differs between cultures. Criticism of this approach comes from many angles, in particular that individual children can be at different stages in different domains at any given point in their lives. Piagetian methods illustrates the development of his ideas through his work with Binet on IQ scales, where he was interested in the errors that children made and in particular the systematic way in which these happened. He noted that children aren’t miniature adults and emphasised that children need to be allowed to talk freely in research, avoiding too many questions and instead allowing them to elaborate on their thinking: moving towards open-ended questions and semi-structured interviews. His three mountains task (Piaget, 1969) illustrated the egocentric nature of children’s thinking: it is difficult for 4-5 year olds but easy enough at 9-10.
Other tasks were around conservation of mass, volume and number, which prove difficult before around 7 (i.e. at the preoperational stage), as are class inclusion problems (e.g. are there more red flowers or more flowers in a bunch of red and yellow flowers?) (Goswami, 2014). He didn’t consider the effects of peers and the social situation on learning until much later, nor did he consider the human sense aspect (e.g. the hiding-from-a-policeman variant of the three mountains task and similar variants of the conservation tasks [Donaldson, 1983]). A Vygotskian perspective takes a social-constructivist approach, developed using the concept of the Zone of Proximal Development (ZPD) and the scaffolding of learning: cultural tools together with social interaction produce the skills and abilities that we see. Vygotskian methods presented problems for children to solve but with the addition of cues, e.g. the blocks test asks children to sort blocks into categories with odd names and observes how they develop the meaning of the categorisation. Children’s self-talk explores how Vygotsky saw the disappearance of self-talk around 4-7 as representing the internalisation of the concepts. Self-talk generally reappears when difficult tasks are encountered, even later in life (Smith, 2007). Social-constructivist interventions looks at talk within classrooms: Lyle (2008) noted that 90% of it consisted of closed responses in the Initiate, Response, Feedback pattern. Mercer (1995) considered the types of interactions that were used, with Warwick (2013) finding that exploratory talk was the most useful. Mercer (2014) went on to develop interventions aiming to teach the styles of speech required to develop collaborative working. Mercer (2006) found that these interventions improved children’s performance over a range of topic areas, i.e. as Vygotsky would have it, developing their social skills affected their thinking skills more generally.
Measuring beliefs about epistemology begins by highlighting that teachers’ beliefs about epistemology affect how they teach. Thus a Piagetian approach will assume that children develop in set stages whilst a Vygotskian one will emphasise social learning. Self-report questionnaires are generally used to explore the impact of these beliefs. Such questionnaires are developed starting from a literature review, followed by a pilot, before moving on to factor analysis. The Epistemological Questionnaire (EQ) (Schommer, 1990) used a Likert scale on a range of statements such as ‘Successful students understand things quickly.’ From this, the factor analysis came up with four factors with good reliability: Fixed ability, Simple knowledge, Quick learning and Certain knowledge. Hofer’s Epistemological Beliefs Questionnaire (EBQ) groups the factors into the nature of knowing (what knowledge is: the certainty and simplicity of knowledge) and the process of knowing (how you come to understand knowledge: the source and justification of knowledge). Erdamar and Alpan (2013) used this to consider the belief systems, ranging from knowledge as fixed and certain, coming from authority figures, through to knowledge as complex and requiring effort to learn. However, Schraw (2013) found that the EQ didn’t cover everything and went on to develop the Epistemic Beliefs Inventory (EBI) to address the shortcomings. Others such as Tümkaya (2012) have gone on to add demographic and personal details, producing a three-factor model: ‘the belief concerning that learning depends on effort’, ‘the belief concerning that learning depends on ability’, and ‘the belief concerning that there is one unchanging truth’. As always, there is a cultural element to this: Chi-Kin Lee (2013), using the EQ in China, found an authority/expert factor not in the original EQ. What is culture though?
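The questionnaire-development pipeline described above (pilot items → factor analysis → named factors) can be sketched as a toy numerical example. Everything here is invented for illustration: the sample size, the two hypothetical factors and their item loadings are assumptions, and scikit-learn’s `FactorAnalysis` stands in for whatever software the original researchers used.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Simulate 200 respondents answering 6 Likert items (1-5).
# Items 0-2 are built to load on a hypothetical "Quick learning" factor,
# items 3-5 on a hypothetical "Certain knowledge" factor.
n = 200
latent = rng.normal(size=(n, 2))
loadings = np.array([
    [1.0, 0.0], [0.9, 0.1], [0.8, 0.0],
    [0.0, 1.0], [0.1, 0.9], [0.0, 0.8],
])
responses = latent @ loadings.T + rng.normal(scale=0.5, size=(n, 6))
# Discretise to a 1-5 Likert scale.
likert = np.clip(np.round(responses + 3), 1, 5)

# Extract two factors from the simulated questionnaire data.
fa = FactorAnalysis(n_components=2, random_state=0)
fa.fit(likert)

# Each row: one item's estimated loadings on the two factors.
# Items 0-2 should load mainly on one factor, items 3-5 on the other.
print(np.round(fa.components_.T, 2))
```

In a real study the factors are then inspected and named (as Schommer did with Fixed ability, Simple knowledge, etc.), and reliability is checked before the scale is used.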
Tümkaya (2012) considered university students from different faculties, finding that social science students emphasised the dependence of learning on effort and context whilst medical students emphasised innate ability. The case of inclusive education introduces the idea of key word signing as a way of supporting the communication skills of those with severe learning difficulties, e.g. the use of Signalong, which is based on British Sign Language. This is key word signing rather than a fully developed sign language. Some evidence suggests that a social-constructivist approach works best in inclusive classrooms (Mercer, 2009), which in turn implies that those teachers would have an epistemology of social-constructivism, but few studies have considered this (Florian and Black-Hawkins, 2011). Sheehy and Budiyanto (2014) considered this in Indonesia (mainly on the videos). Pompeo (2011) indicated that reflecting on one’s epistemological beliefs can help improve them, and hence social science students tend to be more sophisticated in terms of epistemology than science students.
Reference: Sheehy, K. (2016). Developmental psychology: cognitive development and epistemologies. In Ness, H., Kaye, H. and Stenner, P. (2016). Investigating Psychology 3. Milton Keynes: The Open University.
Copyright © 2004-2014 by Foreign Perspectives. All rights reserved.
DE300: Investigating Psychology 3: Language, thought and culture
Chapter 4 Language, thought and culture runs to about 50 pages and is one of the optional chapters on the first TMA.
Introduction this is a very brief introduction to what is to follow in the chapter, touching on the concept of language, moving on to concepts and the idea that the speakers of different languages actually think differently and some difficulties in the use of language in experiments.
What is language? Aitchison (2008) points out that all normal human beings speak and that we have come to consider language as something that only humans do. However, Clarke (2006) found that gibbons in Thailand also employ a form of speech, albeit a somewhat simpler one than typical human languages. Clearly English has a great deal more vocabulary than the gibbon language, but it also has a grammar, which non-human languages lack (Sampson, 2009). Aitchison (2008) identifies a range of characteristics which languages possess: 1) a vocal-auditory channel (although other channels can be used, e.g. braille uses touch), 2) arbitrariness, i.e. the symbol used to represent an object does not resemble the object, 3) semanticity, i.e. the symbols used are generalisable, e.g. we can refer to a specific dog or dogs in general, 4) spontaneous usage, 5) turn-taking, 6) duality, thus the letters in dog only form the symbol for dog when combined, 7) cultural transmission, i.e. we must learn languages whereas birds develop songs even when raised in isolation (not an experiment that would get ethical approval with humans!), 8) displacement, i.e. we can talk about things that aren’t happening here and now, and finally 9) structure-dependence, 10) creativity and 11) mind-reading, which are all considered human-only language features. Chomsky (1957) in particular stressed the structure-dependence (i.e. grammar) aspect and noted that it is easy to produce grammatically correct but nonsense sentences and also that many of our sentences are unique. The ability to anticipate intentions isn’t entirely limited to humans, as Tomasello (2010) highlighted with his example of chimpanzees passing food to humans, but he noted that they don’t tend to form joint goals.
Warneken (2006) demonstrated that toddlers and human-raised chimpanzees would co-operate in goal directed activities but the chimpanzees did not participate in social games without a goal in mind and moreover did not attempt to re-engage the humans who had withdrawn from the activities.
What are concepts made of? introduces the idea of concepts as mental categories which have a series of attributes that are necessary and sufficient. In particular, in this classical view, anything having those attributes is just as good an example of the category as any other thing with those attributes. Prototype theory takes Rosch’s (1973) idea that some exemplars are better examples of the concept than others, e.g. an orangey-red isn’t as good an example of red as a “proper” red, and similarly some fruits are better examples of fruits than others, e.g. an apple is a better exemplar than an olive. As always, the experiment had some limitations, in particular what does it mean to be a good example? Did that just mean to some participants how enjoyable it was rather than how typical it was? Also, apple is a much more commonly used word (“A is for Apple”…), though Mervis (1976) ruled this out. Other typicality effects, such as the estimation of the chance of a cross-species infection, have been found (Rips, 1975), and Mervis (1980) found that children acquire vocabulary in order of typicality. Why this typicality effect exists was traced to the more typical exemplars having more features of the category, e.g. a robin is clearly a more typical bird than an ostrich, while less typical exemplars overlap with other categories (e.g. bats aren’t great examples of mammals) (Rosch and Mervis, 1975). The best examples of a category are called prototypes and have all the required attributes but none of those from other categories. Kurbat’s (1994) study using images showed that typicality wasn’t confined to words and meanings, and Kempton illustrated the cultural differences in prototypes using boots, finding army boots worked in the UK and cowboy boots in the US. The knowledge approach Murphy’s knowledge approach (2004) contrasts with Rosch’s prototype theory in considering that concepts are richer than simple dictionary definitions.
Barsalou (1983) illustrated this by using ad hoc categories such as “ways to avoid being killed by the Mafia”. Stanfield and Zwaan (2001) found that concepts had other properties, so that sentences about nails hammered into walls got a swifter response when the nail illustration was horizontal than when it was vertical, i.e. the orientation of the nail was part of the concept of something being hammered into a wall. Hampton (1987) found that there was feature cancellation, so that in asking for pets that are also birds, migration wasn’t mentioned as a property. Fodor (1998) noted that prototype theory implies that a prototypical pet fish should be cuddly, as prototypical pets are. Conceptual combination was investigated by Keil (2000), who found that emergent features were associated with phrases but not with the underlying words, e.g. arctic bicycles had spiked tyres yet neither arctic nor bicycle had them. The knowledge vs typicality contrast was found by Proffitt (2000), where tree experts drew on their knowledge to estimate the likelihood of a disease being transmitted between species, whereas Rips (1975) found that in non-experts it was typicality that dominated estimations of the transmission of bird disease, as one would expect. Smith and Sloman (1994) found that knowledge was used when the participants had to give reasons for their choices.
Do speakers of different languages think differently? Introduces the idea that the language we speak influences, but does not determine, what we can think. Language effects on colour discrimination considers whether the words that we have in our language for colours influence the colours we can see, e.g. Russian has words for blue and light blue comparable to the English red and pink. Franklin (2005) considers that the colours are hard-wired whereas Goldstein (2009) considers that they are influenced by the language that we use. For example, Himba doesn’t have separate words for blue and green. As it turns out, Goldstein (2009) found that Himba speakers could distinguish them, but at the blue-purple and green-blue ranges they behaved like English children who didn’t know the names for the colours, although there are issues around the environment in which they are raised: the Himba are in a desert with a limited colour range whilst the English children see all kinds of colours. Winawer (2007) found an advantage for Russian speakers in distinguishing light blue from dark blue (for which they have different words) over English speakers. Language effects on more abstract concepts considers whether this colour effect can be extended to more abstract concepts such as time. This goes into some detail on the experiments that Boroditsky (2001, 2011) conducted to examine Mandarin and English speakers’ ways of thinking about time (vertically vs horizontally); she found that there are differences, but these could be a function of general experience rather than of language experience specifically.
Does perception influence thought? Does language experience affect cognition or is it that language arises out of the perceptual experience? Barsalou (1999) went even further, suggesting that cognition is accompanied by the experience, e.g. if you think of the word “up” then the areas of your brain carrying out the corresponding actions are also activated, i.e. the cognition is embodied. Traditionally cognition is regarded as abstract, i.e. disembodied (Kaye, 2010). On this view, transducers translate input into a domain-specific modality (audio, visual, etc.) which feeds an amodal central processing function. There is evidence that the embodied view (i.e. cognition grounded in physical experience) is the way that it works (e.g. egg and chips vs chicken and egg). Glenberg and Kaschak (2002) used an ‘action-sentence compatibility effect’ to demonstrate this, e.g. ‘Joe sang the cards to you’ doesn’t make sense whereas ‘You gave the earring to Susan’ does. Borghi et al. (2004) used an inside/outside contrast, e.g. fuelling the car (outside) vs driving the car (inside), with probe words such as tyre and steering wheel, responses being easier when the word matched the location (i.e. fuelling the car going with tyre); this also worked with shapes, e.g. a flat palm went with smoothing the tablecloth.
Reference: Kirkbride, S. and Smith, M. C. (2016). Language, thought and culture. In Ness, H., Kaye, H. and Stenner, P. (2016). Investigating Psychology 3. Milton Keynes: The Open University.
DE300: Investigating Psychology 3: Memory in the real world
Book 1, Chapter 3 Memory in the real world runs to around 50 pages and is one of the three optional chapters on the first TMA.
Introduction highlights some of the difficulties of real-world memory experiments, e.g. lack of controls, lack of objective facts to compare the memories against, and the ethical issues. What can experiments tell us about remembering falsely? The approach in experimentation follows an encoding phase, provision of post-event information (which may be false) and retrieval. Loftus has explored the provision of misleading information through leading questions and discussions with co-witnesses, and has been able to influence the remembering of childhood incidents that didn’t happen and even short-term food preferences. Chandler (2001) found that these false memory effects were temporary, i.e. that the original memory was retained. The effectiveness of the false memory was found to depend on how plausible it was (Walther and Blank, 2004). Taking false memory into the laboratory starts by discussing the Deese–Roediger–McDermott (DRM) technique of inducing false memories implicitly, e.g. inducing participants to remember that they heard “bed” when the initial list was duvet, pillow, sheet, etc. Zhu (2013) found that the underpinning mechanisms in DRM false memories and explicit ones appeared to be different.
Laboratory experimentation points out that although psychology laboratories may be essentially normal offices, the environment remains an artificial one. A laboratory experiment on the other-race effect reports on Anzures’ (2014) study of children’s recognition of faces from other races, which found that there was no statistical difference in recognition between 5 and 10 year olds, although they did recognise Chinese faces less accurately. Extrapolating to the real world points out a number of limitations to Anzures’ experiment: it was an artificial setting, it used artificial stimuli (e.g. a 2D photo), the task was artificial (a two-alternative forced choice, in rapid succession), it used an artificially short time-span, it utilised explicit memory, and the consequentiality and motivation were clearly quite different than in a line-up situation. Thus, on the whole, ecological validity was somewhat lacking: as Gibson (1979) illustrated, a picture of a pipe is not itself a pipe.
Face recognition introduces the concept that face recognition may involve a special type of memory. Are faces special? Our exposure to exemplars of the category face is clearly much greater than for other categories and moreover, whilst we don’t need to distinguish between individual pineapples, we do need to distinguish between individual faces. But are we treating faces in a fundamentally different way? Face specificity or expertise? introduces prosopagnosia (the inability to recognise faces) and points out that people can have that whilst being able to recognise everyday objects, or vice versa (Farah, 1991; McMullen, 2001). Although this seems to imply that faces are recognised differently, it could equally be that the damage was to areas involved in more general processing, e.g. memory of fine detail. Yin (1969) noted that when objects are presented upside down, everything except faces is recognised, which he suggests is evidence that face recognition is a different type of process. However, Bruyer and Crispeels (1992) showed that it was familiarity with the exemplars, rather than faces specifically, that accounted for the upside-down slowing of recognition. This expert effect has been demonstrated in training (Rossion, 2002) but not with experts on birds (Gauthier, 2000) and other categories. Familiarity in face recognition discusses the different quality of recognition that comes with familiarity: we can recognise friends immediately, even after many years, but have difficulty in picking out someone whom we have not seen a great deal: a familiarity effect that applies equally to groups. Biases in face recognition considers the Other-Race Effect (ORE), the relative difficulty in recognising faces from different races. Brigham and Malpass (1985) showed that this was an aspect of familiarity. There is also some evidence of an Own-Age Bias (OAB), although this seems less consistent.
Bartlett and Leslie (1986) showed that younger participants recognised faces around their own age better than they recognised older ones, but that the older participants didn’t have that bias, although there are issues around their age banding. Other studies with tighter banding have shown the effect at all age bands (Perfect and Moon, 2005, and others). The level of contact is considered the deciding factor, with the cognitive approach of Hancock and Rhodes (2008) coming down on the level of experience, essentially training, as the decider. However, it could be argued that it is a social categorisation effect that determines how we process the face, e.g. the categorisation-individuation model (CIM) (Hugenberg et al., 2013), which suggests that the categorisation happens first, with only the in-group being considered at an individual level. Practical implications of biases in face recognition illustrates that these range from embarrassment to potentially major issues in line-up identification. This is something of a problem as eyewitnesses are believed about 70% of the time whether or not they seem reliable (Loftus, 1983). Whilst the laboratory studies have a lot of power (lots of participants, each with lots of data points), they are severely lacking in ecological validity, both for normal life and for line-up situations.
Eyewitness evidence just points out how crucial effective eyewitness evidence can be. Identity parades (line-ups) starts off by describing the simultaneous and sequential line-up procedures, with Steblay’s (2001) finding that the sequential line-ups were more accurate. However, McQuiston-Surrett (2006) found that sequential line-ups were only more effective when the perpetrator wasn’t present, i.e. they reduced the chance of identifying a suspect who was innocent. In simultaneous line-ups, the person who looks most like the perpetrator may be chosen, even when witnesses are instructed that the perpetrator may not be present, due to pressure from the situational context (Memon, 2003). By contrast, in sequential line-ups, they are forced to make absolute decisions rather than relative ones. The mystery man procedure introduces the idea of having a mystery man in the line-up, specifically for children, so that they can select the mystery man as a positive “don’t know” option, which overcomes the pressure that children feel to make a selection in the situation (Havard and Memon, 2013). Applied memory experiments highlights the differences between Anzures’ (2014) experiment, which used large numbers of images but required a forced choice seconds later, and Havard and Memon’s (2013), which used a video viewed once and tested recognition a few days later, i.e. having much higher ecological validity. What affects eyewitness evidence? considers the effects of estimator variables (those outside the control of the criminal justice system, such as those to do with the witness and the characteristics of the crime, e.g. lighting levels, distance of the witness from the action) and system variables (those under the control of the criminal justice system, such as the nature of the line-ups and questioning). Wagenaar and van der Schrier (1996) suggested a Rule of 15, which states that for accuracy the limits are 15 m and 15 lux.
Even in ideal circumstances, Flin and Shepherd (1986) found that there was a tendency to underestimate above-average characteristics and overestimate below-average ones, i.e. a tendency to average out. Cutler (1987) found that the presence of a weapon reduced the accuracy of identification still further. To improve accuracy, sequential line-ups can be used, together with specific instructions rather than just asking who it is (Cutler et al., 1987).
Reference: Harrison, G., Ness, H. and Pike, G. (2016). Memory in the real world. In Ness, H., Kaye, H. and Stenner, P. (2016). Investigating Psychology 3. Milton Keynes: The Open University.
DE300: Investigating Psychology 3: Investigating memory: experimental and clinical investigations of remembering and forgetting
Book 1, Chapter 2 Investigating memory: experimental and clinical investigations of remembering and forgetting runs to 40 pages and is one of the three chapters that are options on the first TMA.
Introduction re-introduces the encoding, storage and retrieval model of memory and points out that there are different types of memory which are considered in more detail throughout the rest of the chapter.
What types of memory are there? This begins by introducing the idea of remembering as a form of mental time-travel (Tulving, 2002) made up of a series of episodes and autobiographical events. There is a distinction between declarative memory (i.e. memory of events) and non-declarative memory (i.e. memory of processes). Episodic memory is linked to the hippocampus and medial temporal lobe (MTL) (as evidenced through brain damage and observational/experimental studies). Memory malfunction: the evidence from neuropsychology and amnesia goes on to consider what happens when something goes wrong, in particular examining the cases of Clive Wearing and Henry Molaison who, through losing the ability to lay down new memories, are essentially always living as though they had just woken up. Although in both cases their episodic memory is gone, they retain their procedural memory, so CW can still play musical instruments. This anterograde inability to create new memories contrasts with retrograde amnesia, which is the loss of memories previously laid down. The general lack of loss of semantic memory (i.e. of language or intellectual abilities) implies a separation of these from episodic memory (Tulving, 1985). All cases are different as a consequence of the differing causes and different brain regions affected. Memory brain regions: the key role of the hippocampus begins by pointing out that since the damage to CW’s brain was to the hippocampus, and since he both retained knowledge of events before the damage and was able to carry on normal conversations (thus retaining short-term memory), these functions must take place outside the hippocampus (Milner). Moreover, he could still acquire new vocabulary and new semantic information (albeit inconsistently) (Corkin, 1984, 2002). Other damage in areas adjacent to the hippocampus points to these having a role in laying down long-term memories (e.g. the fencing foil incident).
Converging evidence for the role of the hippocampus in memory starts with Krebs’ (1989) study of birds, which found that those with larger hippocampi were better at remembering where they stored food, with Sherry and Hoshooley (2010) finding that it was largest at the times of year when chickadees stashed their food. O’Keefe (1976) found that rats build an internal map of their enclosures using place cells, with head cells acting as direction indicators. Maguire (2000) found a similar effect in London cab drivers: the hippocampus increased in size with experience in the job (would this still apply now that GPSs are used?). Huppert and Piercy (1976, 1978) highlighted the difference between familiarity and recognition in their experiment with Korsakoff patients: both control and Korsakoff patients had very similar scores on familiarity but quite different ones on recognising whether they had seen the images the same day or the day before. Shimamura and Squire (1987) referred to this difference as source amnesia, i.e. a difficulty in recalling when things happened. Korsakoff patients also exhibit confabulation, the remembering of things that didn’t happen (Moscovitch, 2002), which may be related to déjà vu. Malfunctioning mental time travel: retrograde amnesia and the temporal gradient introduces the concept of retrograde amnesia, i.e. the loss of memories predating the damage, which tends to have a temporal gradient, with earlier memories being more resilient. Consolidation theory (Squire, 1992) proposes that after a time in the hippocampus the memories are consolidated elsewhere. Takashima (2009) illustrated this, and it has been shown through damage to the hippocampus largely affecting recent memories rather than distant ones (Squire, 1992). That said, there is a reminiscence bump in autobiographical memories around the teens and twenties. Autobiographical memories are difficult to test for accuracy and may be subject to a social or family memory effect.
Knowing what you don’t know you know: explicit versus implicit memory picks up the idea of explicit memory (what happened when) and implicit memory (procedural). For example, Henry Molaison (HM) could improve his performance on a motor task yet couldn’t remember practising it. In Parkinson’s disease this is reversed, i.e. patients retain episodic memory but can’t learn new processes.
Testing memory: a few reflections starts off by noting that memories consist of events that have an item, a time and a place, and that usually we will have some cue to trigger the recall. Laboratory tests are generally either free recall, cued recall or recognition tests. Standing (1973) showed that recognition is very easy to do. Huppert and Piercy (1976, 1978) suggested that it is the link between events and times that is broken in amnesia, i.e. it is in the reconstruction that the problem occurs. Notably, impaired familiarity with normal recollection doesn’t happen. Note that serial recall (recalling items in order) is harder than free recall for normal adults, but the reverse applies in Alzheimer’s patients (Cherry, 2002). Familiarity versus recollection: neurological correlates notes developments in memory research on ageing, with the latest (Tree and Perfect, 2004) indicating that it is the linking of source and item that is lost in ageing, i.e. you’ll know something but not be able to relate it to when it happened. Thus Cohen and Faulkner (1989) found that there were difficulties with source-based information.
Modelling memory introduces two different memory models. Model 1: Aggleton and Brown’s (1999) neural model of episodic memory posited a system 1, in the hippocampus/mammillary bodies/thalamic regions, that dealt with episodic information (explicit memory and recollection), and a system 2, in the MTL, which dealt with familiarity, i.e. context-free memory. Squire (2000) and others have argued that there is no functional difference and that the whole MTL is used in memory. The two-system model is supported by Korsakoff patients, who have pre-frontal cortex lesions and who do well on familiarity tests but poorly on others (Ranganath and Knight, 2003). Neuropsychological evidence for the two-system proposal underpinning recall and recognition is supplied by Mayes (2002), who had a patient with hippocampus damage performing well on recognition but poorly on recall. Neuropsychological evidence against the two-system proposal: Chan (2002) had a similar patient who performed poorly on both recall and recognition, as did Squire (2007). Davachi and Wagner (2002) present fMRI evidence showing that there is always some hippocampus activity in memory. Model 2: Baddeley and Hitch’s working memory model took account of the recency effect in memory and proposed that all incoming information is held in short-term memory before being transferred to longer-term memory by way of a working memory (Short Term Storage, STS) which had a limited capacity. Baddeley and Hitch (1974), using a dual-task experiment, showed that working memory and the STS weren’t the same thing. From this, they proposed that there were multiple STS areas (e.g. verbal, image). Baddeley (1975) showed that there was no difference in the number of short or long words that could be remembered, but only if they were presented visually. Modes include audio, visual and spatial (Baddeley, 1980).
An important part of this model is the central executive, which Baddeley proposed as the mechanism by which focus, task switching and prioritisation are achieved. Alzheimer's patients are unable to perform tasks concurrently even when the tasks are at an appropriate level of difficulty. Finally there is the episodic buffer, which can hold around four chunks of information (Allen, 2012) and enables us to remember a sentence when ordinarily we can only remember around two seconds of random words.
Reference: Kaye, H. and Tree, J. (2016) 'Investigating memory: experimental and clinical investigations of remembering and forgetting', in Ness, H., Kaye, H. and Stenner, P. (eds) Investigating Psychology 3. Milton Keynes: The Open University.
Copyright © 2004-2014 by Foreign Perspectives. All rights reserved.
Exploring Psychology: Person Psychology: Psychoanalytic and Humanistic Perspectives
Chapter 9 is on Person psychology: psychoanalytic and humanistic perspectives. It runs to just under 50 pages and is the fourth exam chapter. The topics here are covered in somewhat more depth in D171 Introduction to Counselling.
This is quite a complex chapter which looks first at psychoanalysis then at humanistic approaches before finally comparing the two.
This harks back to the identity chapter in some ways, looking at who we are and how we got to be that way, focusing on subjectivity and our inner selves in general, which goes somewhat against the grain of present-day psychology's attempts to become more scientific and objective.
Psychoanalysis is very much dependent on the ideas and techniques of Freud, which developed out of the deep self-analysis that he conducted towards the end of the 1800s. From this, three broad themes emerge: 1) the importance of unconscious feelings and emotions, 2) their origin in early childhood experiences and 3) the importance of unconscious anxiety and inner conflict (psychodynamics). To explore the unconscious he initially used hypnosis but moved on to free association and dream analysis (which tries to map the symbolism within dreams onto real-life objects [see The Interpretation of Dreams, 1900]; there are all kinds of issues with this). In terms of early childhood experiences, he sees us as moving through various levels of pleasure, starting with the oral stage (pleasure from sucking, 0 to 2 years), moving on to the anal stage (pleasure from pooing, 2 to 4) before we reach the phallic stage (from 4 to adolescence), with subsequent relationships incorporating, for example, the oral stage through kissing. Oedipal conflict arises during the phallic stage, when boys unconsciously find their fathers to be a source of competition for their mother's affections, but it's been suggested that this really came from his Jewish background, where his father would have been aloof. His account of female development as a consequence of penis envy seems more than a touch flaky. Moving into adulthood, the earlier relationships can exhibit transference onto adult ones. Allied to that is the idea of fixation on an earlier stage of development, e.g. an over-reliance on oral gratification through being fixated on the oral stage, leading to chewing sweets, drinking or talking. Psychodynamics moves us on to the consideration of three levels of self and the conflicts that can arise between the id (the basic desire to satisfy biological needs), the ego (the reality-testing perceptual level) and the superego (the moralist highest level), e.g.
the id may want sexual gratification, the ego reins that back from a fear of punishment, whilst the superego throws guilt into the mix. This conflict leads to angst and is managed through repression, displacement or projection onto another person. These defence mechanisms to avoid internal conflict are largely unconscious, e.g. forgetting to pay an annoying bill, or, for young children, projection of anger onto a doll. There are variations of psychoanalytic theory: 1) variations in the driving force, e.g. object relations rather than sexuality, 2) variations in how early childhood develops and 3) the role which society plays.
Humanistic psychology takes more of an existential approach, i.e. we exist, are conscious and have choice (autonomy), and it allows for personal growth. This is centred on our conscious experience of the events going on around us, but it is an experience of which we are generally unaware, something that makes it difficult to study. Maslow (1973) picked up on the idea of a peak experience: a feeling of delight, meaningfulness and wholeness. Csikszentmihalyi (1992) picked up on flow experiences, the total involvement in something. Kelly (1955) described personal constructs as the way in which we look at the world, consisting of a range of bipolar aspects (e.g. friendly-cold, stimulating-dull), which he displayed in a repertory grid for an individual; this enabled him to model the way that person looked on the world, e.g. if they used happy-sad in a similar way to lively-reserved it might indicate how they related to others. Very rigid constructs would indicate that the individual may have difficulties in relationships. He considered that our experiences are open to reinterpretation: constructive alternativism. Existentialists consider that we have situated freedom, i.e. a great deal of freedom to choose who we wish to be, albeit situated within a range of constraints; they refer to acknowledgement of this situation as authenticity. We all have Frankl's will to meaning, the feeling of importance in finding a purpose and direction for our lives, through actions, experience, love or fortitude. Moving along, Maslow (1954) introduced his hierarchy of needs, ranging from physical needs at the bottom of his pyramid to self-actualisation at the top, although there are issues with his selection of people, e.g. those making full use of their talents would be likely to devote themselves to this work. Rogers looked more at how we might reach self-actualisation through personal growth.
He considered that our sense of self rests on our own experience and on evaluations by others, and he developed person-centred counselling, which operates by way of unconditional positive regard, to get around the problems of conditional evaluations by others. Humanistic psychology takes a holistic approach encompassing methods such as encounter groups, gestalt therapy (lots of role-playing) and psychosynthesis, with current developments such as positive psychology.
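Kelly's repertory grid idea lends itself to a small computational sketch. Below is a hypothetical illustration (the construct names, the people being rated and all ratings are invented for the example, not taken from Kelly): each bipolar construct is a row of 1-7 ratings for the same set of people, and correlating two rows shows whether the person uses those constructs in a similar way, as in the happy-sad / lively-reserved example above.

```python
# Hypothetical repertory grid: rows are bipolar constructs, columns are
# five people ("elements") rated 1-7 towards the first pole.
# All names and numbers here are invented for illustration.
grid = {
    "happy-sad":       [6, 2, 5, 1, 6],
    "lively-reserved": [7, 1, 6, 2, 5],
    "friendly-cold":   [3, 6, 2, 7, 1],
}

def correlation(xs, ys):
    """Pearson correlation between two rows of ratings."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# A high correlation suggests the two constructs are being used in
# much the same way; a grid full of such pairs would be the "very
# rigid constructs" the text mentions.
r = correlation(grid["happy-sad"], grid["lively-reserved"])
print(round(r, 2))
```

With these invented ratings the two constructs correlate strongly, which on Kelly's reading would hint at how this person relates happiness to liveliness in others.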
How do psychoanalysis and humanistic psychology compare? In looking at subjective experience, psychoanalysis considers the unconscious and uses a lot of interpretation, whereas humanists consider the conscious and analyse the information, e.g. through repertory grids. In terms of autonomy, psychoanalysis considers that we are the result of our childhood experiences, whereas humanists consider that we have a lot of opportunity for personal growth and change. On change, psychoanalysis reveals how we got to be at this point, whereas humanists consider that we are our own agents in getting here (psychoanalysis would say that without the deep understanding, changes will be superficial). Criticism of psychoanalysis is mainly in terms of it being subjective and non-scientific.
For the exam, the key topics for this chapter are: