What is multimodality?

Multimodal texts are defined as texts which communicate their message using more than one semiotic mode, or channel of communication. Examples are magazine articles which use words and pictures, websites which contain audio clips alongside the words, or films which use words, music, sound effects and moving images. As soon as you start to take this idea seriously, you realise that, in a sense, all human communication is intrinsically multimodal. We rarely read, write, receive or send messages to one another in a single mode. In spoken language, for example, words are often accompanied by facial gestures, hand movements and so on. This paralanguage is communicative, and is hard to separate from words as we engage in the process of interpretation. An email message may be thought of as written text, but it is accessed via a series of visual icons on a computer, is read in the context of a website or desktop screen, and may contain iconic representations of the sender’s mood such as emoticons (or ‘smilies’), or unusual punctuation added by the sender for emphasis. Email communication is often quite ‘speech-like’, too, so can be said to contain elements of spoken language (more on this later).
Even a piece of solid written text with no pictures can be said to convey messages through visual modes. We may be influenced by the typeface of the text: it may seem formal or informal, childlike (such as large lower-case letters), or carry other connotations which support or undermine the apparent message of the words. The layout of the page can also be interpreted as conveying meaning: think about your impression of a text set out in columns like a newspaper article, or double spaced like a first draft of a report, or densely packed like a dictionary entry. Advertisements exploit this extra layer of meaning as a matter of routine. Our knowledge and experience of other texts is brought to bear and colours what we take from any new text, even if this process is not a conscious one. Some of the principal communicative components of text are:

- written or spoken language
- intonation
- images (photographs, diagrams, drawings), and aspects of images such as colour, sharpness of focus, spatial composition, etc. Also other visuals such as logos, corporate letterheads, shop or road signs
- gestures
- facial movements
- action (movement in film, for example).
In this section you will learn about and try out types of analysis which aim to integrate visual and physical aspects of communication with analysis of spoken and written language. Multimodal approaches to the study of different forms of communication – the visual aspects of communication (in art/cultural studies) or the physical (non-verbal communication in psychology), for example – have a long history, of course. However, the study of communication within the tradition of Western linguistics has tended to focus predominantly on verbal aspects of communication. In recent times there have been calls to integrate visual and physical aspects of communication into analyses of spoken and written language. These calls arise out of two principal concerns:

- to acknowledge that verbal language always takes place alongside a whole array of other representational (semiotic) resources (the word ‘semiotic’ or ‘semiosis’, meaning ‘the meaning of signs’, is often used in these approaches to signal an interest in language as well as other sign systems)
- that global communication practices at the beginning of the twenty-first century, notably exemplified in internet usage, are increasingly more obviously multimodal, displacing the verbal as the central mode of communication.
Activity 8 Communicating via websites

Allow up to 1 hour
Take a moment to visit The Open University homepage:
The Open University
As websites go, it’s fairly straightforward and contains only two modes of communication, verbal and visual. But these modes, even on simple websites, communicate in many ways: through layout, colour, typeface, for example. What do the different elements of this website suggest to you?
Computers, then, are rapidly adding new multimodal texts to our daily communicative practices. In some communities, though, multimodal communication is routine and has existed for centuries. The next reading introduces you to an example of this from Brazil.
Traditions of multimodal practices

Section 3 has already introduced the idea that wider social processes, including cultural practices, shape the ways we use language and create meaning. However, the introduction of the technology of writing interacts with traditional cultural practices and can be generative or transformative. Literacy can transform practices from ‘vision’ to paper: this new literacy is then adapted into its own multimodal cultural identity.
Swanwick (2002) highlighted a number of issues relevant to both deaf and hearing children, as they learned to write in English. The deaf children she studied had varying degrees of deafness and varying proficiency in British Sign Language (BSL). Some had hearing parents and siblings; some did not.
Swanwick pointed out that deaf children learning to write English have to shift between, and make sense of, three modes of communication simultaneously: sign language (visual), spoken English and written English. Monolingual hearing children only have to cope with two. Some of these deaf children may have a visual-gestural code as their ‘inner speech’, making it harder for them to translate into written English than it is for their hearing or partially deaf counterparts, whose inner speech is spoken English. Swanwick noted that differences between the two languages, such as the importance of facial gesture and word order, make the literacy development of deaf children very different from the biliteracy development of hearing children. Some meanings in BSL, moreover, are not amenable to direct translation.
Swanwick concluded that the children used a variety of strategies to write their stories in English, and suggested that those with more developed speaking skills appeared to find the writing task easier, as they could think in English rather than only in BSL.
Widening interest in multimodal texts

As an academic area of study, multimodality has attracted increasing interest over the last decade or so. This interest stems from a number of factors, including:

- the number and type of multimodal texts have increased dramatically
- we need to understand and be ‘literate’ in reading multimodal texts
- we need to understand how and why such texts are produced.
As well as these more ‘traditional’ texts, however, computers have rapidly increased the extent and range of multimodal communication we encounter. Unlike early computers which required written commands to be entered, all modern computer systems use desktop screens with visual icons that users click to start programs. Programs themselves rely on the use of button bars (icons) to perform most functions, and if we use CD-ROMs or the internet we are immediately immersed in multimodality – sounds, images, video clips, radio programmes, music.
Understanding multimodal texts

Surrounded as we are by such texts, it is important that we understand how meaning is derived from individual elements in a text, such as words, pictures and sounds, and how the meanings of these elements interact to form a whole.
Many researchers believe that such an understanding of multimodal texts is so important that it should be a central part of literacy pedagogy. The New London Group (or Multiliteracies Project), whom we briefly mentioned in section 2.3, first published ‘A pedagogy of multiliteracies: designing social futures’ in 1996. It sets out a pedagogy for ‘multiliteracies’ aimed at broadening traditional conceptions of literacy to encompass multimodal communication. The authors give their reasons for advocating a broad definition of literacy as follows:
First, we want to extend the idea and scope of literacy pedagogy to account for the context of our culturally and linguistically diverse and increasingly globalised societies, for the multifarious cultures that interrelate and the plurality of texts that circulate. Second, we argue that literacy pedagogy now must account for the burgeoning variety of text forms associated with information and multimedia technologies. This includes understanding and competent control of representational forms that are becoming increasingly significant in the overall communications environment, such as visual images and their relationship to the written word – for instance, visual design in desktop publishing or the interface of visual and linguistic meaning in multimedia. Indeed, this second point relates closely back to the first; the proliferation of communications channels and media supports and extends cultural and subcultural diversity. As soon as our sights are set on the objective of creating the learning conditions for full social participation, the issue of differences become critically important. How do we ensure that differences of culture, language, and gender are not barriers to educational success? And what are the implications of these differences for literacy pedagogy?
New London Group, 1996, p. 61
The authors argue that literacy pedagogy must take account of the different literacy demands made on students in an increasingly culturally diverse world, where future employment depends less on manual skills and more on communication skills. The purpose of education, they argue, is to equip students with the skills to participate fully in social and economic life.
These are broad and ambitious aims. Small studies into how children begin to engage with literacy support them, however. Millard and Marsh (2001) looked into the relationship between children’s visual literacy skills and emergent writing, and teacher responses to their pupils’ drawings. They found that drawings, although often a vital part of the child’s communication of a story and its significance, were largely ignored or seen as an unimportant part of the transition into ‘proper writing’. Millard and Marsh state that, increasingly, pressures on teachers to achieve certain standards in writing mean that an important part of children’s literacy development is being overlooked. The effect on boys, in particular, was to engender lower motivation and achievement (Millard and Marsh, 2001, p. 55).
Coles and Hall (2001) consider how contemporary texts often require different ways of reading than do conventional books, with their linear and ordered reading paths – from left to right in English, for example. They looked at some modern children’s books which break down these traditional pathways and subvert our expectations – by having characters break out of the story to speak to the reading child, or by having the Big Bad Wolf defend himself in an alternative version of the Three Little Pigs fairytale, or by weaving together different narratives which require the reader to make choices to proceed with the story. Coles and Hall describe these as displaying the fun, parody and irony of postmodernism:
The search for ‘true’ gives way to playfulness where coherence is formed by constantly unfolding meanings, and expressed through choices the reader makes.
Coles and Hall, 2001, p. 112
The term ‘postmodernism’ is sometimes used interchangeably with ‘poststructuralism’, which you met in section 3, but is used by Coles and Hall to convey a perceived sense of the precariousness of meaning-making in texts (see Graddol, 1994, pp. 17–19).
Children also regularly interact with websites and periodicals, which make similar demands on them. Because reading in these texts is non-linear, and readers have to actively engage with them rather than passively consume them, the authors argue that there are implications for how reading is approached in school:
[T]he reading curriculum, and associated assessment criteria, still promote a linear view of reading, and rarely promote the kinds of literacy which are required in the workplace and in the home.
Coles and Hall, 2001, p. 112
Understanding how and why texts are produced

The forms that texts take are often closely related to their means of production, and to the intentions of the producers, which are shaped by political and commercial forces, or sometimes simply by certain views of the world (ideologies). It is important to be aware of these forces and to ask questions of a text, such as: who produced it, and why? What is its purpose? What views does it portray or reject? This is not to argue that texts are intrinsically sinister; rather that authors/producers have a purpose which is not always apparent, and which may suppress alternatives or guide our interpretation of the text. This ideological approach (often involving quite detailed critique of texts) has been an important one over the last three decades, and has been taken up by social scientists and linguists in particular.
The notion of ‘design’

A key concept in the Multiliteracies Project and within writings on multimodality is that of ‘design’, a term increasingly used by those involved in research into multimodal texts, such as Kress and van Leeuwen (2001). This use of the term differs from more usual and commonsensical notions of design – such as the use of space or layout in ‘interior design’ – although it encompasses these meanings as well. The term ‘design’ in multimodal research signals a shift away from a focus on verbal language alone, and a move forward from a focus on critique and ideological stances in texts. Design, ‘the organisation of what is to be articulated into a blueprint for production’ (Kress and van Leeuwen, 2001, p. 50), implies that we are all increasingly able to exercise greater control over the texts we produce, and have a wider range of semiotic modes to select from when we communicate. In much of the literature, however, the term is still used interchangeably with ‘design’ in its more commonsensical sense. We will return to the concept of ‘design’ at several points in this section.
This dual notion of ‘design’ mirrors in some ways the dual meanings of discourse – both concrete and abstract – discussed in earlier sections. Both meanings of ‘design’, and both meanings of ‘discourse’, need to be considered in multimodal texts. So far in this unit we have discussed discourses in terms of the verbal mode of communication. It is also possible to identify them in operation in the visual. For example, Kress and van Leeuwen (2001) analyse photographs of children’s bedrooms taken from House Beautiful magazine, with the accompanying text. If we focus on the design of the bedroom in the everyday, more concrete, sense of the term, we might talk about descriptive details: colours, where things are, what’s there. This descriptive detail is important in multimodal research and analysis. But so too is the more abstract notion of design: constructions of childhood, family, etc.
Kress and van Leeuwen show how the bedroom furniture, use of colour, and layout impose or imply certain types of activities in the room (a child’s sofa is for reading, pegs are set at a low height for children to hang up their own clothes, and so on). The photographs therefore encode discourses about childhood, homes, families and gender. The design presents as normal and conventional certain idealised Western models of children’s behaviour: they will play or read quietly in such spaces, away from adults who have better things to do, and they will tidy up after themselves. Kress and van Leeuwen point out that not all cultures separate children from adults in these ways, nor do they design spaces for these specific activities. They also note that the design of the bedrooms is highly gendered, and link this to conventionalised notions of appropriate behaviour for boys and girls: girls read, sing, dance and dress up, whereas boys play with trains and toys; a desk is also shown.
This children’s bedroom is clearly a pedagogical tool, a medium for communicating to the child, in the language of interior design, the qualities (already complex: ‘bold’, yet also ‘sunny’ and ‘cheerful’), the pleasures (‘singing and dancing with your friends’), the duties (orderly management of possessions and, eventually, ‘work’), and the kind of future her parents desire for her.
Kress and van Leeuwen, 2001, p. 15
Multimodal texts can guide our reading and interaction with them in other ways. Researchers have noted, for example, that encyclopaedias produced on CD-ROMs can be quite restrictive in terms of how they can be used, what information is available, and how people and events are represented. Luke (2000) sees a major challenge for education in mediating electronic texts:
Literacy requirements have changed and will continue to change as new technologies come on the marketplace and quickly blend into our everyday private and work lives. And unless educators take a lead in developing appropriate pedagogies for these new electronic media and forms of communication, corporate experts will be the ones to determine how people will learn, what they learn, and what constitutes literacy. For instance, a quick look through any of today’s most popular CD-ROM encyclopaedias (e.g., Microsoft’s Encarta) shows how limited entries on, for example, ‘Australia’ or ‘Aborigines’ are; how ideas are connected by lateral links and pathways which exclude other knowledge options; and how the software in fact ‘teaches’ the user-learner certain cognitive mapping strategies. Many of these best-selling American-authored encyclopaedias are in use in Australian schools and households. But even Australian-authored educational CD-ROMs reproduce the same old tired narratives on, for instance, bushrangers framed in mythologies of male heroes, and narratives of colonialism framed in mythologies of settlement instead of invasion. The point is that today’s corporate software designers can easily become the literacy and pedagogy experts of tomorrow. This is not to say that many educational products on the market today are pedagogically unsound or lack innovative teaching-learning methods. But it is to suggest that educators need to become familiar with the many issues at stake in the ‘information revolution’ so that we know how and where we must intervene with positive and critical strategies for Multiliteracies teaching, and how to make the best and judicious use of the many multimedia resources available.
Luke, 2000, p. 71
Zammit and Callow (1999) analysed in detail screens from two educational CD-ROMs (The ANIMALS!, based on San Diego Zoo, and the Encarta encyclopaedia). They compared the introductory screens (splash screens) and a page of information from each CD about koala bears. The authors were interested in the ideological positions set up within the CD-ROM texts, in how information was presented as factual or questionable, in implicit or explicit hierarchical structures, and in how the design encourages particular ways of navigation through the text. The ANIMALS!, for example, uses predominantly visual icons, with many symbolic abstract images, and discourages individual keyword searching – this CD-ROM prefers visitors to go on a pre-defined tour of the zoo. Encarta, on the other hand, uses both verbal text and visual icons and encourages topic-specific searching and navigation. Zammit and Callow demonstrate the complexity of reading positions required by CD-ROMs, even on a single screen. They advocate providing students with critical evaluative tools for use with such multimodal texts.
Van Leeuwen (2000) looks at a different aspect of educational databases on CD-ROM. He is interested in how visual and verbal information is presented, and what sorts of information are presented in each mode. He analysed a Microsoft CD-ROM, Dangerous Creatures, which uses a number of ‘guides’ who lead users through the database. Van Leeuwen notes that the visual mode is used in a similar way throughout the tours, whereas the verbal text differs considerably; and he questions whether the various guides leading users through the database represent different points of view on events. Overall, he suggests that the apparently different viewpoints are actually packaged consistently – while they may appear heterogeneous on the surface, there is an underlying conformity. Van Leeuwen links this to practices in other spheres of life, such as radio broadcasts which, while admitting wide variations of accent and musical style in their programmes, all tend to follow a similar overall format. Children using this CD-ROM may follow different routes, but they are nonetheless learning social and textual patterns which are remarkably conformist.
Activity 9 Evaluating CD-ROMs

Allow up to 2 hours
If you use CD-ROMs for teaching, or have one at home, or can use one in a library, look to see whether you can apply van Leeuwen’s points to them. Look particularly for:
It is claimed that technology has played a hugely facilitating role in democratising the processes of text production, for those who have access to it. Desktop publishing and word processing programs certainly make it easy for users to change typefaces, layout, emphasis; they can add images (often supplied pre-drawn); they can digitise photographs and change almost anything about them; they can send audio clips and video clips, and so on. Web authoring programs are also widely available. A vast number of non-professionals have thus been handed the tools of individualised text production. As we will see in the next part, however, even larger numbers of people have no such tools.
Our increasing engagement with multimodal texts in more and more areas of our lives, as well as the need to create them ourselves, comes largely from the widespread use of technology. Before we turn to look at that in more detail, however, we outline some more general ways in which technology influences language use and linguistic forms.
Information technology and language: Access and participation

Just as not all multimodality derives from technology, not all technology produces multimodal texts. This part provides a brief outline of some of the ways in which developments in information and communications technology are linked to changes in language, as well as how we communicate with each other via technology. We have insufficient space here to discuss in detail the implications – political, social, commercial – of such developments, but you may wish to follow these up yourself. Some connections between language and technology are pretty banal and unproblematic; others are profoundly political or financial in nature, and have to do with the globalising business practices of large corporations, and concomitant effects on smaller local communities, or the status of minority languages. It is clear, for example, that the availability of information and communications technology is not evenly spread around the world – there are vast inequalities in terms of access and use. Accurate statistics on internet use are difficult to find, but it is possible to find some broad indicators such as number of users worldwide, and the languages being used by them. The website Nua Online, for example, makes what it calls an ‘educated guess’ as to numbers of people online, based on results of a range of surveys. The figures for February 2002 are shown in Table 4.
Table 4 Numbers of people online

World total: 544.2 million
Africa: 4.15 million
Asia/Pacific: 157.49 million
Europe: 171.35 million
Middle East: 4.65 million
Canada and USA: 181.23 million
Latin America: 25.33 million
Nua Online, 2002
Others give figures in terms of percentage of population, which is more useful for drawing conclusions about comparative levels of access, although still imprecise for individual countries. For example, Singapore’s high number of users is not apparent from the United Nations Development Programme’s ‘Annual Report’ (see Table 5).
Table 5 Internet users by region (percentage of population)

United States: 54.3%
High-income OECD (excluding US): 28.2%
Eastern Europe and CIS: 3.9%
Latin America and the Caribbean: 3.2%
East Asia and the Pacific: 2.3%
Arab States: 0.6%
Sub-Saharan Africa: 0.4%
South Asia: 0.4%
United Nations Development Programme, 2001
According to Global Reach, an internet site containing information about e-commerce and demographic data, the online language populations in December 2001 were as shown in Figure 1.
Global Reach, 2001
Figure 1 Online language populations
The data in Figure 1 is problematic, of course: it shows what the site calls ‘native speakers’ of each language, but doesn’t show how many people are speakers of more than one language; nor does it show actual internet ‘traffic’, that is, the amount of communication actually taking place in each language. However, it is clear that some developing countries are more or less excluded from the ‘technological revolution’, as Rassool points out:
[T]he cultural and economic heritage of colonialism and the reinforcement of inequalities in postcolonial contexts, have contributed to the fact that many developing countries, especially in Africa, still lack the necessary infrastructure to support the development of an adequate industrial base, let alone having the capacity to enter the technological development paradigm as equal competitors in the global market place.
Rassool, 1999, p. 145
Even where technology is available, it does not necessarily bring an appropriate model of communications to countries with different cultural and traditional practices. Many countries still struggle to provide basic education, with even chalk and slates being in short supply in many areas (see Rassool, 1999, for more about development and education).
In the developed world, however, the introduction of new information technology always brings renewed claims that it is revolutionising the ways we communicate with each other. New media of communication have always brought with them new linguistic forms, and have required us to adapt established practices in order to use them. Often this is because of the limitations of new technology (think of the short, pared-down style of writing used on the early telegraph and then telex machines, or the many symbols and abbreviations used now in text messaging on mobile phones). There are also some less obvious, but interesting, effects of technology on language itself, or on the choice of which language to use.
Even a piece of solid written text with no pictures can be said to convey messages from visual modes. We may be influenced by the typeface of the text: it may seem formal or informal, childlike (such as large lower case letters), or carry other connotations which support or undermine the apparent message of the words. The layout of the page can also be interpreted as conveying meaning: think about your impression of a text set out in columns like a newspaper article, or double spaced like a first draft of a report, or densely packed like a dictionary entry. Advertisements exploit this extra layer of meaning as a matter of routine. Our knowledge and experience of other texts is brought to bear and colours what we take from any new text, even if this process is not a conscious one. Some of the principle communicative components of text are:
- written or spoken language
- intonation
- images (photographs, diagrams, drawings), and aspects of images such as colour, sharpness of focus, spatial composition, etc. Also other visuals such as logos, corporate letterheads, shop or road signs
- gestures
- facial movements
- action (movement in film, for example).
In this section you will learn about and try out types of analysis which aim to integrate visual and physical aspects of communication with analysis of spoken and written language. Multimodal approaches to the study of different forms of communication – the visual aspects of communication (in art/cultural studies), the physical (non-verbal communication in psychology) for example – have a long history of course. However, the study of communication within the tradition of Western linguistics has tended to focus predominantly on verbal aspects of communication. A call has come in recent times to integrate visual and physical aspects of communication into analyses of spoken and written language. This arises out of two principal concerns:
- to acknowledge that verbal language always takes place alongside a whole array of other representational (semiotic) resources (the word ‘semiotic’ or ‘semiosis’, meaning ‘the meaning of signs’, is often used in these approaches to signal an interest in language as well as other sign systems)
- that global communication practices at the beginning of the twenty-first century, notably exemplified in internet usage, are increasingly more obviously multimodal, displacing the verbal as the central mode of communication.
Activity 8 Communicating via websitesAllow up to 1 hour
Take a moment to visit The Open University homepage:
The Open University[Tip: hold Ctrl and click a link to open it in a new tab. (Hide tip)]
As websites go, it’s fairly straightforward and contains only two modes of communication, verbal and visual. But these modes, even on simple websites, communicate in many ways: through layout, colour, typeface, for example. What do the different elements of this website suggest to you?
Reveal comment
Computers, then, are rapidly adding new multimodal texts to our daily communicative practices. In some communities, though, multimodal communication is routine and has existed for centuries. The next reading introduces you to an example of this from Brazil.
Traditions of multimodal practicesSection 3 has already introduced the idea that wider social processes, including cultural practices, shape the ways we use language and create meaning. However, the introduction of the technology of writing interacts with traditional cultural practices and can be generative or transformative. Literacy can transform practices from ‘vision’ to paper: this new literacy is then adapted into its own multimodal cultural identity.
Swanwick (2002) highlighted a number of issues relevant to both deaf and hearing children, as they learned to write in English. The deaf children she studied had varying degrees of deafness and varying proficiency in British Sign Language (BSL). Some had hearing parents and siblings; some did not.
Swanwick pointed out that deaf children learning to write English have to shift between, and make sense of, three modes of communication simultaneously: sign language – visual; English – spoken; English – written. Monolingual hearing children only have to cope with two. Some of these deaf children may have a visual-gestural code as their ‘inner speech’, thus making it harder for them to translate into written English than their hearing or partially deaf counterparts, whose inner speech is spoken English. Swanwick noted that differences between the two languages, such as the importance of facial gesture and word order, make the literacy development of deaf children very different from the biliteracy development of hearing children. Some meanings in BSL, moreover, are not amenable to direct translation.
Swanwick concluded that the children used a variety of strategies to write their stories in English, and suggested that those with more developed speaking skills appeared to find the writing task easier, as they can think in English rather than only in BSL.
Widening interest in multimodal textsAs an academic area of study, multimodality has attracted increasing interest over the last decade or so. This interest stems from a number of factors, including:
- the number and type of multimodal texts has increased dramatically
- we need to understand and be ‘literate’ in reading multimodal texts
- we need to understand how and why such texts are produced.
As well as these more ‘traditional’ texts, however, computers have rapidly increased the extent and range of multimodal communication we encounter. Unlike early computers which required written commands to be entered, all modern computer systems use desktop screens with visual icons that users click to start programs. Programs themselves rely on the use of button bars (icons) to perform most functions, and if we use CD-ROMs or the internet we are immediately immersed in multimodality – sounds, images, video clips, radio programmes, music.
Understanding multimodal textsBeing surrounded by such texts, it is important that we understand how meaning is derived from individual elements in a text, such as words, pictures and sounds, and how the meanings of these elements interact to form a whole.
Many researchers believe that such an understanding of multimodal texts is so important that it should be a central part of literacy pedagogy. The New London Group (or Multiliteracies Project), whom we briefly mentioned in section 2.3, first published ‘A pedagogy of multiliteracies: designing social futures’ in 1996. It sets out a pedagogy for ‘multiliteracies’ aimed at broadening traditional conceptions of literacy to encompass multimodal communication. The authors give their reasons for advocating a broad definition of literacy as follows:
First, we want to extend the idea and scope of literacy pedagogy to account for the context of our culturally and linguistically diverse and increasingly globalised societies, for the multifarious cultures that interrelate and the plurality of texts that circulate. Second, we argue that literacy pedagogy now must account for the burgeoning variety of text forms associated with information and multimedia technologies. This includes understanding and competent control of representational forms that are becoming increasingly significant in the overall communications environment, such as visual images and their relationship to the written word – for instance, visual design in desktop publishing or the interface of visual and linguistic meaning in multimedia. Indeed, this second point relates closely back to the first; the proliferation of communications channels and media supports and extends cultural and subcultural diversity. As soon as our sights are set on the objective of creating the learning conditions for full social participation, the issue of differences become critically important. How do we ensure that differences of culture, language, and gender are not barriers to educational success? And what are the implications of these differences for literacy pedagogy?
New London Group, 1996, p. 61
The authors argue that literacy pedagogy must take account of the different literacy demands made on students in an increasingly culturally diverse world, where future employment depends less on manual skills and more on communication skills. The purpose of education, they argue, is to equip students with the skills to participate fully in social and economic life.
These are broad and ambitious aims. However, small-scale studies of how children begin to engage with literacy lend them support. Millard and Marsh (2001) looked into the relationship between children’s visual literacy skills and emergent writing, and into teacher responses to their pupils’ drawings. They found that drawings, although often a vital part of the child’s communication of a story and its significance, were largely ignored or seen as an unimportant part of the transition into ‘proper writing’. Millard and Marsh state that, increasingly, pressures on teachers to achieve certain standards in writing mean that an important part of children’s literacy development is being overlooked. The effect on boys, in particular, was to engender lower motivation and achievement (Millard and Marsh, 2001, p. 55).
Coles and Hall (2001) consider how contemporary texts often require different ways of reading than do conventional books, with their linear and ordered reading paths – from left to right in English, for example. They looked at some modern children’s books which break down these traditional pathways and subvert our expectations – by having characters break out of the story to speak to the reading child, or by having the Big Bad Wolf defend himself in an alternative version of the Three Little Pigs fairytale, or by weaving together different narratives which require the reader to make choices to proceed with the story. Coles and Hall describe these as displaying the fun, parody and irony of postmodernism:
The search for ‘true’ gives way to playfulness where coherence is formed by constantly unfolding meanings, and expressed through choices the reader makes.
Coles and Hall, 2001, p. 112
The term ‘postmodernism’ is sometimes used interchangeably with ‘poststructuralism’, which you met in section 3, but is used by Coles and Hall to convey a perceived sense of the precariousness of meaning-making in texts (see Graddol, 1994, pp. 17–19).
Children also regularly interact with websites and periodicals, which make similar demands on them. Because reading in these texts is non-linear, and readers have to actively engage with them rather than passively consume them, the authors argue that there are implications for how reading is approached in school:
[T]he reading curriculum, and associated assessment criteria, still promote a linear view of reading, and rarely promote the kinds of literacy which are required in the workplace and in the home.
Coles and Hall, 2001, p. 112
Understanding how and why texts are produced
The forms that texts take are often closely related to their means of production and to the intentions of their producers, which are shaped by political and commercial forces, or sometimes simply by certain views of the world (ideologies). It is important to be aware of these forces and to ask questions of texts: who produced it, and why? What is its purpose? What views does it portray or reject? This is not to argue that texts are intrinsically sinister; rather that authors/producers have a purpose which is not always apparent, and which may suppress alternatives or guide our interpretation of the text. This ideological approach (often involving quite detailed critique of texts) has been an important one over the last three decades, and has been taken up by social scientists and linguists in particular.
The notion of ‘design’
A key concept in the Multiliteracies Project and within writings on multimodality is that of ‘design’, a term increasingly used by those involved in research into multimodal texts, such as Kress and van Leeuwen (2001). This use of the term differs from more usual, commonsensical notions of design – such as the use of space or layout in ‘interior design’ – although it encompasses these meanings as well. The term ‘design’ in multimodal research signals a shift away from a focus on verbal language alone, and a move forward from a focus on critique and ideological stances in texts. Design, ‘the organisation of what is to be articulated into a blueprint for production’ (Kress and van Leeuwen, 2001, p. 50), implies that we are all increasingly able to exert greater control over the texts we produce, and have a wider range of semiotic modes to select from when we communicate. Much of the literature, however, still uses ‘design’ interchangeably with its more commonsensical sense. We will return to the concept of ‘design’ at several points in this section.
This dual notion of ‘design’ mirrors in some ways the dual meaning of ‘discourse’ – both concrete and abstract – discussed in earlier sections. Both meanings of ‘design’, and both meanings of ‘discourse’, need to be considered in multimodal texts. So far in this unit we have discussed discourses in terms of the verbal mode of communication. It is also possible to identify them in operation in the visual. For example, Kress and van Leeuwen (2001) analyse photographs of children’s bedrooms taken from House Beautiful magazine, together with the accompanying text. If we focus on the design of the bedroom in the everyday, more concrete, sense of the term, we might talk about descriptive details: the colours, where things are placed, what is there. This descriptive detail is important in multimodal research and analysis. But so too is the more abstract notion of design: constructions of childhood, family, and so on.
Kress and van Leeuwen show how the bedroom furniture, use of colour, and layout impose or imply certain types of activities in the room (a child’s sofa is for reading, pegs are set at a low height for children to hang up their own clothes, and so on). The photographs therefore encode discourses about childhood, homes, families and gender. The design presents as normal and conventional certain idealised Western models of children’s behaviour: they will play or read quietly in such spaces, away from adults who have better things to do, and they will tidy up after themselves. Kress and van Leeuwen point out that not all cultures separate children from adults in these ways, nor do they design spaces for these specific activities. They also note that the design of the bedrooms is highly gendered, and link this to conventionalised notions of appropriate behaviour for boys and girls: girls read, sing, dance and dress up, whereas boys play with trains and toys (a desk is also shown).
This children’s bedroom is clearly a pedagogical tool, a medium for communicating to the child, in the language of interior design, the qualities (already complex: ‘bold’, yet also ‘sunny’ and ‘cheerful’), the pleasures (‘singing and dancing with your friends’), the duties (orderly management of possessions and, eventually, ‘work’), and the kind of future her parents desire for her.
Kress and van Leeuwen, 2001, p. 15
Multimodal texts can guide our reading and interaction with them in other ways. Researchers have noted, for example, that encyclopaedias produced on CD-ROMs can be quite restrictive in terms of how they can be used, what information is available, and how people and events are represented. Luke (2000) sees a major challenge for education in mediating electronic texts:
Literacy requirements have changed and will continue to change as new technologies come on the marketplace and quickly blend into our everyday private and work lives. And unless educators take a lead in developing appropriate pedagogies for these new electronic media and forms of communication, corporate experts will be the ones to determine how people will learn, what they learn, and what constitutes literacy. For instance, a quick look through any of today’s most popular CD-ROM encyclopaedias (e.g., Microsoft’s Encarta) shows how limited entries on, for example, ‘Australia’ or ‘Aborigines’ are; how ideas are connected by lateral links and pathways which exclude other knowledge options; and how the software in fact ‘teaches’ the user-learner certain cognitive mapping strategies. Many of these best-selling American-authored encyclopaedias are in use in Australian schools and households. But even Australian-authored educational CD-ROMs reproduce the same old tired narratives on, for instance, bushrangers framed in mythologies of male heroes, and narratives of colonialism framed in mythologies of settlement instead of invasion. The point is that today’s corporate software designers can easily become the literacy and pedagogy experts of tomorrow. This is not to say that many educational products on the market today are pedagogically unsound or lack innovative teaching-learning methods. But it is to suggest that educators need to become familiar with the many issues at stake in the ‘information revolution’ so that we know how and where we must intervene with positive and critical strategies for Multiliteracies teaching, and how to make the best and judicious use of the many multimedia resources available.
Luke, 2000, p. 71
Zammit and Callow (1999) analysed in detail screens from two educational CD-ROMs (The ANIMALS!, based on San Diego Zoo, and the Encarta encyclopaedia). They compared the introductory screens (splash screens) and a page of information from each CD about koala bears. The authors were interested in the ideological positions set up within the CD-ROM texts, in how information was presented as factual or questionable, in implicit or explicit hierarchical structures, and in how the design encourages particular ways of navigation through the text. The ANIMALS!, for example, uses predominantly visual icons, with many symbolic abstract images, and discourages individual keyword searching – this CD-ROM prefers visitors to go on a pre-defined tour of the zoo. Encarta, on the other hand, uses both verbal text and visual icons and encourages topic-specific searching and navigation. Zammit and Callow demonstrate the complexity of reading positions required by CD-ROMs, even on a single screen. They advocate providing students with critical evaluative tools for use with such multimodal texts.
Van Leeuwen (2000) looks at a different aspect of educational databases on CD-ROM. He is interested in how visual and verbal information is presented, and what sorts of information are presented in each mode. He analysed a Microsoft CD-ROM, Dangerous Creatures, which uses a number of ‘guides’ who lead users through the database. Van Leeuwen notes that the visual mode is used in a similar way throughout the tours, whereas the verbal text differs considerably; and he questions whether the various guides leading users through the database represent different points of view on events. Overall, he suggests that the apparently different viewpoints are actually packaged consistently – while they may appear heterogeneous on the surface, there is an underlying conformity. Van Leeuwen links this to practices in other spheres of life, such as radio broadcasts which, while admitting wide variations of accent and musical style in their programmes, all tend to follow a similar overall format. Children using this CD-ROM may follow different routes, but they are nonetheless learning social and textual patterns which are remarkably conformist.
Activity 9 Evaluating CD-ROMs
Allow up to 2 hours
If you use CD-ROMs for teaching, or have one at home, or can use one in a library, look to see whether you can apply van Leeuwen’s points to them. Look particularly for:
- Any ‘division of labour’ between visual and verbal modes: what sort of information is presented in each?
- How are you encouraged to engage with the narrative, and how restricted are you in terms of following your path(s) through the material?
- What conclusions are you able to draw from this?
It is claimed that technology has played a hugely facilitating role in democratising the processes of text production, for those who have access to it. Desktop publishing and word processing programs certainly make it easy for users to change typefaces, layout, emphasis; they can add images (often supplied pre-drawn); they can digitise photographs and change almost anything about them; they can send audio clips and video clips, and so on. Web authoring programs are also widely available. A vast number of non-professionals have thus been handed the tools of individualised text production. As we will see in the next part, however, even larger numbers of people have no such tools.
Our increasing engagement with multimodal texts in more and more areas of our lives, as well as the need to create them ourselves, comes largely from the widespread use of technology. Before we turn to look at that in more detail, however, we outline some more general ways in which technology influences language use and linguistic forms.
Information technology and language: access and participation
Just as not all multimodality derives from technology, not all technology produces multimodal texts. This part provides a brief outline of some of the ways in which developments in information and communications technology are linked to changes in language, as well as to how we communicate with each other via technology. We have insufficient space here to discuss in detail the implications – political, social, commercial – of such developments, but you may wish to follow these up yourself. Some connections between language and technology are fairly banal and unproblematic; others are profoundly political or financial in nature, having to do with the globalising business practices of large corporations and their concomitant effects on smaller local communities, or on the status of minority languages. It is clear, for example, that the availability of information and communications technology is not evenly spread around the world – there are vast inequalities in terms of access and use. Accurate statistics on internet use are difficult to find, but it is possible to find some broad indicators, such as the number of users worldwide and the languages being used by them. The website Nua Online, for example, makes what it calls an ‘educated guess’ as to the number of people online, based on the results of a range of surveys. The figures for February 2002 are shown in Table 4.
Table 4 Numbers of people online
World total: 544.2 million
Africa: 4.15 million
Asia/Pacific: 157.49 million
Europe: 171.35 million
Middle East: 4.65 million
Canada and USA: 181.23 million
Latin America: 25.33 million
Nua Online, 2002
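The scale of the inequality is easier to see if the Table 4 figures are expressed as shares of the world total. The short Python sketch below is purely illustrative, using only the Nua Online estimates given above:

```python
# Nua Online's February 2002 estimates (Table 4), in millions of users.
users_millions = {
    "Africa": 4.15,
    "Asia/Pacific": 157.49,
    "Europe": 171.35,
    "Middle East": 4.65,
    "Canada and USA": 181.23,
    "Latin America": 25.33,
}
world_total = 544.2  # million people online worldwide

# Express each region's user count as a percentage of the world total.
for region, users in users_millions.items():
    share = users / world_total * 100
    print(f"{region}: {share:.1f}% of the world's internet users")
```

The calculation makes the imbalance stark: Canada and the USA alone account for about a third of all users, while the whole of Africa accounts for under one per cent.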
Others give figures as a percentage of population, which is more useful for drawing conclusions about comparative levels of access, although still imprecise at the level of individual countries. For example, Singapore’s high number of users is not apparent from the United Nations Development Programme’s ‘Annual Report’ (see Table 5).
Table 5 Internet users by region (percentage of population)
United States: 54.3%
High Income OECD (excluding US): 28.2%
Eastern Europe and CIS: 3.9%
Latin America and the Caribbean: 3.2%
East Asia and the Pacific: 2.3%
Arab States: 0.6%
Sub-Saharan Africa: 0.4%
South Asia: 0.4%
United Nations Development Programme, 2001
According to Global Reach, an internet site containing information about e-commerce and demographic data, the online language populations in December 2001 were as shown in Figure 1.
Global Reach, 2001
Figure 1 Online language populations
The data in Figure 1 is problematic, of course: it shows what the site calls ‘native speakers’ of each language, but doesn’t show how many people are speakers of more than one language; nor does it show actual internet ‘traffic’, that is, the amount of communication actually taking place in each language. However, it is clear that some developing countries are more or less excluded from the ‘technological revolution’, as Rassool points out:
[T]he cultural and economic heritage of colonialism and the reinforcement of inequalities in postcolonial contexts, have contributed to the fact that many developing countries, especially in Africa, still lack the necessary infrastructure to support the development of an adequate industrial base, let alone having the capacity to enter the technological development paradigm as equal competitors in the global market place.
Rassool, 1999, p.145
Even where technology is available, it does not necessarily bring an appropriate model of communications to countries with different cultural and traditional practices. Many countries still struggle to provide basic education, with even chalk and slates being in short supply in many areas (see Rassool, 1999, for more about development and education).
In the developed world, however, the introduction of new information technology always brings renewed claims that it is revolutionising the ways we communicate with each other. New media of communication have always brought with them new linguistic forms, and have required us to adapt established practices in order to use them. Often this is because of the limitations of new technology (think of the short, pared-down style of writing used on the early telegraph and then telex machines, or the many symbols and abbreviations used now in text messaging on mobile phones). There are also some less obvious, but interesting, effects of technology on language itself, or on the choice of which language to use.