When we have done some analysis of the structure and likely language content of the typical texts certain learners need to write, we can design discrete-item, indirect tests of their ability to get the staging and linguistic content of those texts right. Only then need we test their ability to put it all together and write a full text.
Assessing writing skills
- It is often assumed that the best way to test writing ability is to get the learners to write. That may seem obvious but, particularly at lower levels, being asked to write a whole, properly staged and formed text from scratch is a daunting and challenging task which may demotivate and depress our learners. A second issue is reliability: it is almost impossible to evaluate our learners' ability dependably when a malformed, disorganised and inaccurately written text is the only data source.
- A less forbidding and more reliable and practical approach is often to employ indirect tests which can be used to assess the underlying skills individually.
For example, separately testing the learners' ability to use conjunction, sequencers, pronominal referencing and so on can provide some data about how well (if at all) they may be able to deploy the items in written texts. Equally, we can separate out some of the subskills which combine to make up the holistic skill of writing and test them individually.
Example 1: If one of the target text types our learners will need to produce is a discussion of two sides of an issue (a skill often required in academic settings), then it makes some sense to test discretely the ability to use conjuncts, adjuncts and disjuncts (such as on the other hand, seen from this angle, equally, however, making matters worse, whereas etc.). The ability to soften a point of view with modal expressions (such as it could be argued, it may be assumed, there is some evidence to suggest etc.) is, incidentally, also a key skill.
Example 2: If a key writing skill is to summarise a mass of data in clear prose (as is often required in occupational settings for the production of a summary report on research), then it makes sense to test the learners’ ability to identify key information, describe trends and make tentative predictions based on the identified trends.
The aims of the teaching programme
All assessment starts (or should start) from a consideration of the aims of instruction.
For example, if the (or one) aim of a language course is to enable the learners to do well in an IELTS academic writing examination then this will be very influential in terms of the types of assessment tasks we use and the way in which we measure performance. The backwash (or washback, if you prefer) from the examination format will almost inevitably have to be reflected in the task types we set.
If, on the other hand, our aim is to enable the learners to operate successfully in a work environment then we will set different kinds of assessment tasks and measure performance against different criteria. In this case, a priority will be to measure how accurately and with how much communicative success the learners can handle the specific register and functions required by their work context.
Finally, if we are aiming at enabling our learners to function adequately in an English-speaking environment (perhaps as an immigrant or temporary resident), then this, too, will fundamentally affect the tasks we set and the benchmarks against which we measure success. Here, for example, we might be dealing with filling in forms, finding work and accessing services.
These three factors are to do with ensuring reliability and validity. For more on those two concepts, see the guide to testing, assessment and evaluation. The rest of this guide assumes basic familiarity with the content of that guide.
Fulfilling all three criteria adequately requires a little care.