There is no mention that who can be used for animals. And it isn’t only MW: I checked six learner’s dictionaries, and none of them said this was acceptable, or even an option. This isn’t a huge deal, necessarily, but it can lead to confusion if, say, this has been taught as a ‘rule’ and then students read graded readers where the ‘rule’ is broken without any consequence (search for horse who, fish who, or monkey who in the Lextutor graded reader corpus, for example). And of course, there are tons of sources where students might encounter who being used for animals.
This came up because a student of mine had written “I don’t want a dog who is so big” and a peer suggested it should be which. And that’s FINE. It can be which :-). Or that :-). But it can also be who :-D.
For teachers who like to do consciousness-raising or language-awareness activities, this kind of situation provides opportunities for discussion: What should you do if you read or hear language in real life that doesn’t seem to match what the dictionary or grammar guide says? How might you analyze lines of ‘controversial’ lexicogrammar patterns? Why do people choose to use who, or which, or maybe that in different circumstances (does it seem OK for certain animals but not others? Is it special for pets? Does it change the meaning/tone/nuance?).
Of course, the underlying ideas could be applied to other language questions, too. So, in general, the ‘corpus lesson’ here is that corpora can be used to explore alternatives to more conventional patterns and to aid in developing greater language awareness. Corpus use can be applied not just to learning frequent or common patterns of expression, but also to expanding the ways in which learners are able to express themselves.
When I talked about this with another teacher, they suggested that maybe the learner’s dictionaries (and perhaps some other learner-oriented materials) don’t acknowledge who for animals as acceptable because the usage is new (recent) and thus ‘non-standard’. But I have trouble seeing why this would be considered ‘non-standard’ (in fact, I doubt that in many cases fluent English users would even notice this usage unless it were pointed out or they were looking for it). And it’s not really a recent thing, is it?:
In the WordSketch function, there is now a button that provides more context for the words co-occurring with the target word/lemma.
Automatic PoS tagging in the corpus sometimes results in errors, and this feature is meant to help with that problem. It doesn’t prevent the errors, but it should help users make correct identifications despite them.
I use SkELL quite often, so I was glad to read that there has been an update to the example sentences that get shown. This was an occasional but irritating issue, because the kinds of SkELL-assisted activities I usually do with my students are hampered by spelling mistakes and the like. For them, the learners, the cleaner data should be beneficial.
“SKELL If you are a user of SKELL, you might have noticed a recent improvement in the quality of the example sentences. This is thanks to the deletion of sentences that contained spelling mistakes and hapax legomena. While both of these things can be of interest, it is better that the 40 example sentences of a word or phrase are as accurate as possible.
There are 10,370 instances of the word ‘dolphin’, for example, in the full corpus. The algorithm that chooses the best 40 for learners now works with cleaner data.”
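The cleaning step described in the quoted update can be sketched in a few lines. This is only an illustrative guess at the kind of filtering involved: the sentences, the function name, and the naive whitespace tokenizer are my own inventions, not SkELL’s actual pipeline.

```python
from collections import Counter

def clean_examples(sentences):
    """Drop candidate example sentences containing a hapax legomenon
    (a token occurring only once across the whole collection).
    A rough sketch of hapax filtering, not SkELL's real algorithm."""
    counts = Counter(w.lower() for s in sentences for w in s.split())
    return [s for s in sentences
            if all(counts[w.lower()] > 1 for w in s.split())]

sentences = [
    "Dolphins are intelligent animals.",
    "Intelligent dolphins are animals.",
    "Dolphins are xqzv animals.",  # contains a one-off junk token
]
print(clean_examples(sentences))  # the 'xqzv' sentence is filtered out
```

A real system would also need spell-checking and the example-ranking step (SkELL’s ‘best 40’), but the point is that ranking over pre-cleaned data is what produces better sentences for learners.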
It’s not a quick read, but the short version is that DDL works quite well in general; there are very encouraging results, and several medium-to-large effect sizes were found. Going forward, there needs to be more fine-grained research on for whom, for what, under what conditions, and for how long DDL works well. They also make some important points about what information needs to be included in future quantitative work on DDL.
@anthonyteacher has a great post at his site discussing the patterns “not to VERB” and “to not VERB”. He writes about his students’ reactions to the constructions, his own view, and some findings from Google N-grams and COCA. You should read his post in full.
I basically agree with everything he says, but there is one point he makes that I would like to extend a little. So I’d like to highlight this paragraph from his post, and especially the statement I put in red:
All of this data tells me several things. First, “to not” is on the rise, most likely due to the fact that the ability to separate an infinitive has become more accepted and “to not” has probably rolled in through a snowball effect. Second, the placement of “not” does not necessarily imply emphasis, as can be seen in the sentences above. Third, while my speech may make some of the older generations shake their fist with anger, possibly telling me I am killing English, I can now reply confidently that my speech is the vanguard of an English where “not” is as placement-fluid as “they” is gender-fluid. My speech may be a speech that is likely to boldly go where few have gone before. Or to not boldly go, because language change is really unpredictable, and this is just a tiny thing.
I chose to highlight this section because at first I felt that my own choices regarding the placement of “not” are sometimes definitely made for emphasis. But after thinking about it, I don’t think it is a matter of placing emphasis per se. Rather, it is about restricting possible meanings/uses.
Let me explain. Here are two partial lines from COCA (query terms: not to mention):
1) … He would talk only if I promised not to mention he lived in …
2) … But tours and marketing materials, not to mention data on the average student, won’t tell you if that college will …
In the first line, I, personally, would probably phrase that as “to not mention”, though not necessarily. The point is that both constructions feel natural to me. However, I can’t imagine myself saying that about the second line. To me, and the way I’m processing these constructions, the first line’s meaning is straightforward, but the second line’s meaning is based on my understanding of “not to mention” as a fixed or partially fixed expression in this instance.
In this case, the construction is not simply negating the mentioning of something (in fact, the thing in question is explicitly and necessarily mentioned/understood). Indeed, the online Cambridge Dictionary, for example, defines “not to mention” as a phrase used when you want to emphasize something that you are adding to a list.
So, generally speaking, I process “to not VERB” as basically interchangeable with “not to VERB” (with a personal preference for “to not VERB”) when the meaning is straightforward (i.e. negating the verb). But “not to VERB”, perhaps because of its associations with certain fixed expressions, seems to me to have a broader range of usage. Something like this:
“not to VERB”: can negate the verb or have idiomatic/figurative meaning and usage
“to not VERB”: simply negates the verb
All the “to not VERB” uses here have meanings that can be understood as simply negating the verb. I suspect it would be this way throughout all the lines.
At least, until the language changes some more 😉
If I have said something glaringly, obviously wrong please tell me. Or if you have evidence of “to not VERB” used in an idiomatic/figurative way, please share it. Or if you have better choices for terminology etc. etc. etc. …
This might be tl;dr … If you are just looking for a list or some links to parallel corpora, please go to the end of this post.
In response to my presentation at this year’s ETJ Tokyo conference, where I talked about the parallel corpus and DDL tool called SCoRE, I was asked whether there were parallel corpora available in other languages. Short answer: Yes! Caveat: They are not always straightforward to use.
First of all, a quick explanation of what a parallel corpus is. It is a kind of bi- or multilingual corpus: one that contains text in one language aligned with translations into one or more other languages. So, for example, if I query talk * in the Japanese interface of SCoRE, I will get concordance lines in English that contain talk + any word, and concordance lines in Japanese that are translations of those English lines. These are parallel concordances.
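To make ‘aligned’ concrete, here is a toy sketch of a parallel concordance lookup. The tiny corpus, the sentence pairs, and the regex-based matching are all invented for illustration; SCoRE’s actual query engine is, of course, far more sophisticated.

```python
import re

# A toy parallel corpus: each entry pairs an English sentence with a
# Japanese translation (sentence-aligned, as in SCoRE). The sentences
# and translations here are invented examples.
parallel = [
    ("Can we talk about the schedule?", "スケジュールについて話せますか。"),
    ("They talk with their neighbors every day.", "彼らは毎日近所の人と話します。"),
    ("She reads a book on the train.", "彼女は電車で本を読みます。"),
]

def parallel_kwic(pattern, corpus):
    """Return (English, translation) pairs whose English side matches
    the pattern -- e.g. r'\btalk \w+' for talk + any word."""
    return [(en, ja) for en, ja in corpus
            if re.search(pattern, en, re.IGNORECASE)]

for en, ja in parallel_kwic(r"\btalk \w+", parallel):
    print(en, "|", ja)
```

Because the pairs stay linked, every English hit comes back with its translation, which is exactly what makes parallel concordances readable for lower-proficiency students.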
Here is another illustration showing a sample of a concordance from the Portuguese-English parallel corpus COMPARA. The query terms were “talk” “.*” (this is the syntax for the talk + any word search in COMPARA, quote marks included).
Parallel concordancing can be used for activities like translation tasks, of course, but it is also useful for DDL, at least in certain situations. In my experience, having translations of English concordance lines available in students’ L1 is very helpful for both lower-proficiency students and novice DDL students. Both the content and format of concordance lines can be difficult for such students, but in both cases the L1 support offered by parallel corpora allows students to quickly grasp the meaning of the English lines, letting them focus on the context or patterns in the lines. Even if they don’t always need the L1 support to understand the English lines, they often feel more comfortable and are more receptive to doing activities and work that they are generally unaccustomed to. Perhaps as they become more familiar with concordance lines they can switch to monolingual lines.
Another benefit is that they can get a sense of how differently (or similarly) concepts, ideas, or notions may be expressed in the L2 as compared to their L1. Students can pick up on shades of meaning, nuance, and usage. I’ve seen this lead to lexical development where students have commented that they found a phrase or new (and natural-sounding) way to express something they had previously expressed inaccurately due to L1 interference, or had been completely unaware of because it wasn’t covered in any traditional way (i.e. it really is something they discovered for themselves). It’s only anecdotal, but I have spoken with my students about these mini ‘light-bulb’ moments and they react very positively to them.
There can be issues, though. Users need some understanding of, say, the directionality and relationship of the source material to the translations, of where the translations have come from and their quality, and of course of the fact that the translation seen in a concordance line is almost certainly not the only potential/accurate way to translate the source text. Another thing to keep in mind is that students need to share a single L1, unless the corpus is multilingual with translations available for all of the students’ L1s (which would overcome one issue but possibly raise others).
But still, parallel concordances can be quite useful and make it easier for students to get involved in doing DDL work. For more info about uses and issues with parallel corpora/concordances I recommend reading ‘Frankenberg-Garcia, A. (2005). Pedagogical uses of monolingual and parallel concordances. ELT Journal, 59(3), 189-198.’
Finally, where are these parallel corpora? A simple Google search will turn up numerous parallel corpora available for download, such as the Open Parallel Corpus (OPUS), but that means you need to run your own parallel concordancing software. Something like AntPConc might be a relatively easy-to-use piece of software for this. However, even if you are comfortable running an application like AntPConc, the parallel corpora you find might not be appropriate for your students unless you are in an ESP environment with students learning language for, say, international legal or technical contexts (like the EuroParl corpus).
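If you do end up running your own concordancer over downloaded text, the core operation is a keyword-in-context (KWIC) search. Below is a bare-bones sketch of that idea; the function, the sample text, and the fixed context width are my own illustrative choices, and dedicated tools like AntPConc do this with far more features (alignment, sorting, wildcards, and so on).

```python
import re

def kwic(text, keyword, width=30):
    """A minimal keyword-in-context concordancer: find each whole-word
    occurrence of `keyword` and show `width` characters of left and
    right context, right-padding the left side so keywords line up."""
    lines = []
    for m in re.finditer(r"\b%s\b" % re.escape(keyword), text, re.IGNORECASE):
        left = text[max(0, m.start() - width):m.start()].rjust(width)
        right = text[m.end():m.end() + width]
        lines.append(f"{left} [{m.group(0)}] {right}")
    return lines

sample = ("We talk about corpora. Teachers talk with students. "
          "Students learn to talk more confidently.")
for line in kwic(sample, "talk"):
    print(line)
```

Aligning the keyword in a center column is what makes the surrounding patterns (talk about, talk with, talk more) jump out when you scan down the lines.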
Alternatively, I’ve compiled a very brief list of some parallel corpora and projects that have web-based interfaces. A caution, though: I am familiar only with the English-Japanese corpora on this list. Although some of the others have been used for language learning, or were designed with language learning as a goal, I cannot vouch for the pedagogic applicability or accuracy of the other language combinations here (I’ll leave that to folks who understand the languages in these corpora).
And so ends CorpusMOOC 2016. Week 8’s lecture content focused on ‘bad language’: swears, insults, and other sorts of uncouth language. The build-up to week 8 didn’t quite match the actual event, imo, but in a good way. It seemed to me that ‘bad language’ week was marketed, in a way, as a wild, fun, here-we-go kind of thing.
And it was fun! But not in a wild way. It was actually quite a sober analysis of the use of ‘bad language’, looked at from a variety of angles and variables such as age, class, and sex. That’s what made it fun: it was really dissecting the language, trying to understand it, contemplating it, and not just reveling in getting to use no-no words.
The practical activities wrapped up the CQPweb tutorials. I haven’t written very much about the practical activities, but that’s because I think they speak for themselves. They are PRACTICAL. If you’re new to CL, they will undoubtedly be helpful. Even if you’re not, there’s probably some way of searching a corpus that you didn’t know about before, or that you perhaps underutilize, and these activities are a good refresher.
I believe this course is a confidence builder, more than anything else. You come away thinking that even if you can’t do very sophisticated corpus work yet, you know your way around, you know how to begin doing meaningful things, and you know something of the range of work that can be done with corpora.