
Human Memory: Short term and long term memory

 

Introduction

Along with processes such as learning, perception and attention, memory is a crucial aspect of cognition. Indeed, when we ask precisely what ‘memory’ means, it proves impossible to define it independently of these other processes.

 

There is also a long history of scientific investigation into human memory, beginning with Ebbinghaus (1885), but the dominant view of memory has become the information-processing approach. This suggests that memory can best be understood in terms of three essential stages that involve the flow of information through memory: registration, storage and retrieval.

 

Research into the nature of memory

Most psychologists make a distinction between different types of memory — particularly short-term and long-term memory. There is also a third type, the sensory register, which enables information to be stored very briefly so that feature and pattern recognition processes can operate.

 

Short-term memory and long-term memory are usually distinguished in terms of capacity, duration and coding (or encoding).

 

Capacity of STM

In a famous paper, George A. Miller (1956) suggested that the capacity of STM was 7, plus or minus 2 items (7±2). This conclusion is typically supported by means of a digit-span experiment.
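
As a rough illustration of the digit-span procedure (not the original interactive activity that this page linked to), the Python sketch below presents progressively longer random digit strings and stops at the first recall failure. The sequence lengths, presentation method and stopping rule are arbitrary choices for the example.

    import random

    def digit_span_test(max_len=12):
        """Toy digit-span procedure: present increasingly long digit strings
        until recall fails. Returns the longest length recalled correctly."""
        span = 0
        for length in range(3, max_len + 1):
            digits = "".join(random.choice("0123456789") for _ in range(length))
            print(f"Memorise: {digits}")
            input("Press Enter once the digits are out of view...")
            print("\n" * 40)                  # crude way to scroll the digits away
            answer = input("Type the digits in order: ").strip()
            if answer == digits:
                span = length                 # correct recall: try a longer string
            else:
                break                         # first failure ends the test
        print(f"Estimated digit span: {span} items (most people fall in the 7±2 range)")
        return span

    if __name__ == "__main__":
        digit_span_test()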

 

While such investigations support the idea that the capacity of STM is limited, determining exactly what that capacity is has proved rather difficult. There are two basic reasons for this:

  • Different psychologists mean different things by the term capacity. For some it is how much the system can hold (storage capacity), while for others it is how much can be done with the information in storage (attentional capacity). If STM is seen as a kind of workbench, then storage capacity refers to how many items can be placed on the workbench, whereas attentional capacity is how many items can be worked on at any one time.
  • Determining what is meant by an ‘item’ is difficult. Individual units can be grouped together to increase the capacity: numbers can be grouped as dates, letters as words, and so on. Each of these meaningful units is referred to as a ‘chunk’, and through chunking the capacity of STM can be increased significantly (a simple illustration follows this list).
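
As a simple illustration of chunking, the Python sketch below regroups a 12-digit string into three date-like chunks of four digits. The grouping scheme is just an assumption for the example, but it shows how the same material can occupy 12 slots or only 3, depending on how it is coded.

    def chunk_digits(digits, chunk_size=4):
        """Group a digit string into fixed-size chunks (here, year-like groups),
        so that capacity is counted in chunks rather than individual digits."""
        return [digits[i:i + chunk_size] for i in range(0, len(digits), chunk_size)]

    raw = "194519661989"          # 12 separate digits: well beyond 7±2
    chunks = chunk_digits(raw)    # ['1945', '1966', '1989']: only 3 meaningful chunks
    print(f"{len(raw)} digits -> {len(chunks)} chunks: {chunks}")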

 

Coding in STM

The sensory register has no real organisation, so information is stored in more or less the same form as it arrived. However, material in STM is highly transformed: no matter whether we read or hear a word, it will be stored in an acoustic form. Conrad (1964) demonstrated acoustic coding in STM by showing that recall errors for visually presented letters tended to be acoustic confusions (e.g. B mistaken for V).

 

Duration of STM

 

In a famous experiment, Peterson & Peterson (1959) investigated the duration of STM. They asked participants to recall consonant trigrams (e.g. FBK) selected so as to be difficult to pronounce. The recall delay was set to 3, 6, 9, 12, 15 or 18 seconds, during which rehearsal was prevented by having participants count backwards in threes from a target number (e.g. 397). Each participant was tested a total of 8 times at each of the 6 delay intervals.

The findings showed that while about 90% of trigrams were recalled after a 3-second retention interval, only about 10% were recalled after 18 seconds. The duration of STM without rehearsal is therefore very short.
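
The trial structure described above can be sketched in Python as follows. This is only a rough illustration of the Brown-Peterson procedure: the trigram generation, presentation timing and counting instruction are assumptions for the example, not the Petersons' exact materials.

    import random
    import time

    CONSONANTS = "BCDFGHJKLMNPQRSTVWXZ"

    def brown_peterson_trial(delay_seconds):
        """One trial: show a consonant trigram, fill the retention interval with
        backward counting in threes (to block rehearsal), then test recall.
        Returns True if recall was correct."""
        trigram = "".join(random.sample(CONSONANTS, 3))
        print(f"Remember this trigram: {trigram}")
        time.sleep(2)                       # brief presentation (duration arbitrary here)
        print("\n" * 40)

        start = random.randint(300, 999)    # e.g. 'count backwards in threes from 397'
        print(f"Count backwards aloud in threes from {start} for {delay_seconds} seconds...")
        time.sleep(delay_seconds)           # retention interval filled by the counting task

        answer = input("Now type the trigram: ").strip().upper()
        return answer == trigram

    if __name__ == "__main__":
        for delay in (3, 6, 9, 12, 15, 18):
            correct = brown_peterson_trial(delay)
            print(f"Delay {delay}s: {'correct' if correct else 'incorrect'}\n")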

 

The Peterson and Peterson technique has a number of positive features, including the very effective method used to prevent rehearsal. To see why, consider what retention would have been if rehearsal had not been prevented. However, the main criticism of the study is normally directed at their conclusion that forgetting from STM is due to decay. In fact, retroactive interference from the verbal counting task is just as likely an explanation of the forgetting.

 

Characteristics of LTM

Measurement of the capacity of STM is difficult enough, but estimating that of LTM is probably impossible. All we know for sure is that the capacity is very large and may have no upper limit. This huge capacity means that LTM must have a highly organised structure, otherwise retrieval would be much more difficult than it actually is.

Coding in LTM is normally considered to be semantic. Baddeley’s (1966) study shows this: over longer retention intervals, performance on semantically similar words (BIG, HUGE) is worse than performance on acoustically confusable words.

Measuring the duration of LTM is not as straightforward as it might appear. It is no use just asking people what they can remember from the past, as we can never be sure whether the information was coded accurately in the first place or whether it was revised later. The results of Ebbinghaus’s studies suggested that memories decay fairly rapidly, but he used nonsense syllables, and even after a month he could still recall some of them. A more realistic estimate of the duration of LTM comes from the study by Bahrick and his colleagues (1975).

 

The multi-store model of memory

The idea that memory is made up of a number of separate stores goes back at least to the nineteenth century, when William James first made the distinction between a short and a longer term store in memory, using the terms primary and secondary memory. Nearly eighty years later, Atkinson and Shiffrin incorporated the idea of short-term memory (STM) and long-term memory (LTM) in their model of memory. They suggested that these two stores, along with a very short-lasting sensory memory (SM), are the basic structural components of our memory system.

Note that they did not suggest that the stores were identifiable physical (i.e. physiological) structures, but rather that it makes sense to think of memory as if it had these stores. In this sense they are hypothetical constructs rather than real entities. The three components of the multi-store model are related by a number of control processes, most notably rehearsal. Rehearsal has the function of maintaining information in STM beyond the normal duration limit of 30 seconds or so; it also has the role of transferring information from STM to LTM.

 

Evidence for the multi-store model

  • The multi-store model receives support from a number of experimental investigations, especially those into the primacy and recency effects found in free recall.
  • Clinical studies of amnesic individuals add a further dimension to the support for a multi-store account of memory.

 

Problems with the multi-store model

While the MSM has been very influential in psychology, it is not free from criticism. A number of experimental and other findings do not fit with the neat and simple idea of structurally distinct stores.

  • Interpreting free-recall experiments. Tzeng (1973) followed the basic procedure of the free-recall experiments conducted by Glanzer & Cunitz, but used an interpolated task (counting backwards in threes) after each item rather than at the end of the list of words. To his surprise, he found that both primacy and recency effects were still observed. The multi-store model suggests that neither a primacy nor a recency effect should occur, since the interpolated task should effectively remove each item from STM and prevent any transfer to LTM. In addition, long-term recency effects have been observed. For example, Baddeley & Hitch (1977) showed that rugby players, when attempting to recall the names of players they had played against earlier in the season, showed a clear recency effect: the more recent the game, the more names could be recalled. This undermines the idea that the recency effects in free-recall studies are due to the items being stored in STM.

 

  • More studies of amnesic patients. KF, who suffered a motorcycle accident that caused damage to the left parieto-occipital region of his brain, had a number of interesting deficits that have been intensively studied (e.g. Shallice & Warrington, 1970). KF showed a very poor digit span (usually fewer than two items) but good performance on tasks that seemed to indicate an intact long-term store. For example, he was still able to store new information: he could learn a 10-word sequence in fewer trials than normal controls and still retained seven of the 10 items some months later. The multi-store model predicts that this should not be possible, since an intact STM is required to transfer information to LTM.

 

  • Role of rehearsal. We have enormous amounts of information in LTM even though we have probably not rehearsed much of it. Despite the important role that rehearsal is supposed to play in the multi-store model, studies actually show a low correlation between the amount of rehearsal and how much information is recalled. Glenberg et al. (1977) had participants study a 4-digit number for two seconds and then rehearse a single word for varying lengths of time (2, 6 or 8 seconds). Each participant did a total of 64 trials, expecting to be tested on the numbers at the end. Instead they were asked to recall the words. The researchers found that recall was generally rather poor (12% or less) and bore little relationship to the amount of time spent rehearsing: words rehearsed for 8 seconds were no more likely to be recalled than those rehearsed for 2 seconds.

 

Working memory: a new view of short-term memory

‘Working memory’ is not just another term for STM. The two concepts differ in a number of ways, principally in that STM is no longer seen as a unitary ‘box’ that stores information in an acoustic or verbal form, nor as a passive store with a fixed number of slots to hold chunks of information, as envisaged by the multi-store model. Instead, theorists like Baddeley prefer to think of short-term memory as a collection of components that can carry out a number of quite separate cognitive operations on incoming material. It is more like a workbench, where material is constantly being handled, combined and transformed.

According to the original model (Baddeley & Hitch, 1974), working memory has three components, which are related as follows:

  • The central executive is responsible for directing the activity of the ‘slave’ systems, two of which are described below. It works like a supervisor or scheduler, deciding which issues require attention and which can be ignored. However, it has a very limited capacity: it can’t hold much information, or hold it for very long.
  • The phonological loop, as the name implies, operates on acoustic information (sounds). It has two aspects: a phonological input store, where incoming information is stored in an acoustic code, and an articulatory rehearsal process (subvocal speech). The phonological store has a very brief duration (about 1.5-2 seconds) but can be refreshed by reading the trace into the articulatory control process. This aspect of the model has similarities to Atkinson and Shiffrin’s STM (remember that this could also maintain information by rehearsal). The phonological loop may play an important part in learning words, in language comprehension (especially of complex sentences) and in learning to read.
  • The visuospatial scratch (or sketch) pad, on the other hand, stores visual and spatial information — much like a piece of paper can be helpful in sketching out a geometry problem. It also has a limited capacity and short duration, but can be used independently of the articulatory loop.

 

Evidence presented by Baddeley for his working memory model is complex. However, we will consider two areas in detail.

 

The unattended speech effect

It has been known for some time that retrieval of visually presented material such as numbers can be disrupted by the simultaneous presentation of spoken words, even if participants are told to ignore the words. Even meaningless items such as nonsense syllables or words in a foreign language have this effect. Baddeley argues that the unattended material gains access to the phonological loop and interferes with memory. This interpretation is reinforced by the fact that the sounds have to bear some relationship to language (spoken numbers, foreign words, nonsense syllables, etc.): white noise does not interfere with recall. Nor does it matter how loud the interfering material is; so long as it can be heard, phonological material will be disruptive.

What about music? Baddeley & Salamé (1989) found that when participants were asked to recall sequences of visually presented digits against a background of vocal or instrumental music, the disruption produced by vocal music was approximately the same as that produced by unattended speech. This was the case whether the vocal music was classical or pop. With instrumental music the effect was present but less marked. The clear implication is that studying while listening to meaningful material such as vocal music may impair retention and/or comprehension.

 

Articulatory suppression

The other demonstration of the phonological loop is the phenomenon known as articulatory suppression. Performance on a digit-span task is significantly impaired when the participant is asked to utter a stream of irrelevant sounds (such as saying ‘the’ over and over again). Baddeley explains this effect by suggesting that saying the irrelevant item occupies the articulatory control process, preventing it from maintaining or coding information in the phonological store. It may also produce an unattended speech effect by feeding the irrelevant spoken material into the phonological store.

It could be that suppression impairs performance simply because it forces us to divide our attention between two tasks (Parkin, 1988). However, Baddeley notes that the same impairment is not produced when a non-articulatory task such as tapping is used (Baddeley et al., 1984).

 

The levels-of-processing (LOP) model

This influential theory of memory is often seen as the main alternative to the multi-store model. It was first proposed by Craik and Lockhart in the 1970s. They suggested that memory does not consist of three, or indeed any specific number of, stores; instead, the durability of a memory varies along a continuum of processing levels, depending on the depth of encoding. The strength of a memory trace does not depend on the type of store within which it is located, but on how much attention is paid to the information at the time of encoding. Deep, meaningful kinds of processing lead to more permanent retention than shallow, sensory kinds of processing.

Depth is defined in terms of the amount of meaning extracted from the stimulus rather than the number of analyses performed on it. This suggests that straightforward rehearsal through repetition may not be the best way of remembering; more elaborate strategies are more effective.

 

Evidence for the LOP approach

A number of research studies support the LOP approach. Generally speaking, such experiments manipulate the depth of processing and measure the effect on recall. A typical example is Craik & Tulving’s (1975) experiment. Because participants in such experiments are not usually aware that they are taking part in a memory experiment, the learning is incidental.

Another study (Hyde & Jenkins, 1973) gave several lists of words to participants and asked them to complete tasks differing in the depth of processing involved:

  • Rating the words for pleasantness of meaning (i.e. deep processing – semantic)
  • Detecting every occurrence of the letters e and g (shallow processing – visual)

Recall of the words processed for pleasantness was much better than recall of those processed for letter detection, consistent with LOP theory. Interestingly, this was true whether the participants were told at the outset to remember the words or whether learning was incidental.

 

LOP and rehearsal

The multi-store model claimed that rehearsal of any type could benefit LTM. However, Craik & Lockhart suggest that there are two types of rehearsal: maintenance (merely repeating what has already been analysed) and elaborative (involving deeper analysis of the material). Only the latter leads to better remembering. Craik & Watkins’s (1973) experiment supports this.

Elaboration and distinctiveness

Craik and Tulving have suggested that it is not just depth of processing that affects storage but also elaboration (how much processing of any kind is carried out) and distinctiveness (how unusual the processing is). If participants are asked to decide whether a word fits into a complex or a simple sentence, incidental learning is better in the complex condition, which is the more elaborative one (Craik & Tulving, 1975).

 

Evaluation of LOP

  • Emphasises the interdependence of perception, attention and memory rather than treating memory as a series of separate processing stages (as in the MSM).
  • Mainly descriptive rather than explanatory. It doesn’t really explain why deeper processing leads to better recall. In other words, why should something that is deeply processed be stored more permanently in LTM?
  • It is difficult to obtain an independent measure of depth of processing. It is hard to decide whether a task involves deep or shallow processing. Craik & Tulving assumed that semantic processing was ‘deeper’ than visual processing but their only real evidence for this was that more words were remembered in the semantic condition. This is a circular argument.
  • Some studies contradict the model. For example, Morris et al (1977) showed that information processed for sound (rhyming) was better remembered than information processed for meaning (semantic) if rhyming was more relevant to the task. (In their study participants were asked to perform a rhyming recognition task.)

 
