Spoken BNC2014 & EFCAMDAT

Two large, open-access* corpora have recently been released.

Go here to learn about and get access to the Spoken BNC2014, and go here to learn about and get access to the EF-Cambridge Open Language Database (EFCAMDAT). EFCAMDAT is a learner corpus featuring essays from adult learners of English around the world, and this is its second, bigger release. The Spoken BNC2014 is brand new (to the public) and, as you may well be aware, is being described as “the largest ever public collection of transcribed British conversations”.

I don’t have anything too special to say. These both look like terrific resources. I think all who are interested should check them out.


*with registration


Another SkELL Update

SkELL has another new feature.

In the WordSketch function, there is now a button that provides more context for the words co-occurring with the target word/lemma.

Automatic PoS tagging in the corpus sometimes results in errors, and this feature is meant to help with that problem. It doesn’t actually prevent the errors, but it should help users make correct identifications despite them.

For more info, see this post from the FB CorpusCall group and play around with this example page from SkELL.

Parallel corpora and concordances

This might be tl;dr … If you are just looking for a list or some links to parallel corpora, please go to the end of this post.


In response to my presentation at this year’s ETJ Tokyo conference, where I talked about the parallel corpus and DDL tool called SCoRE, I was asked whether there were parallel corpora available in other languages. Short answer: Yes! Caveat: They are not always straightforward to use.

First of all, a quick explanation of what a parallel corpus is. It is a kind of bi- or multilingual corpus: one that contains text in one language aligned with translations in one or more other languages. So, for example, if I query talk * in the Japanese interface of SCoRE, I will get concordance lines in English that contain talk + any word, along with concordance lines in Japanese that are translations of those English lines. These are parallel concordances.

English-Japanese parallel concordancing in SCoRE
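Just to make the mechanics concrete, here is a toy sketch in Python of what searching an aligned corpus looks like. The sentence pairs and the `parallel_concordance` helper are invented for illustration; this is not how SCoRE is actually implemented.

```python
import re

# Toy aligned corpus: each item pairs an English sentence with its
# Japanese translation. Real parallel corpora are aligned the same
# way at the sentence (or phrase) level, just at scale.
ALIGNED = [
    ("Let's talk about the plan.", "その計画について話しましょう。"),
    ("I want to talk with you.", "あなたと話したいです。"),
    ("They talk quietly in class.", "彼らは授業中静かに話します。"),
    ("She gave a talk yesterday.", "彼女は昨日講演をしました。"),
    ("Thanks for coming.", "来てくれてありがとう。"),
]

def parallel_concordance(pattern, corpus):
    """Return (English line, aligned translation) pairs whose English
    side matches the query, i.e. the equivalent of 'talk *'."""
    regex = re.compile(pattern)
    return [(en, ja) for en, ja in corpus if regex.search(en)]

# 'talk' followed by any word, like the SCoRE query above
for en, ja in parallel_concordance(r"\btalk \w+", ALIGNED):
    print(f"{en}  |  {ja}")
```

Note that “She gave a talk yesterday.” also matches: a plain regex search cannot tell the noun talk from the verb, which is one limitation that PoS-tagged corpora address.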

Here is another illustration showing a sample of a concordance from the Portuguese-English parallel corpus COMPARA. The query terms were “talk” “.*” (this is the syntax for the talk + any word search in COMPARA, quote marks included).

Parallel concordancing in COMPARA

Parallel corpora are often used in translation and contrastive studies [see McEnery, A., & Xiao, R. (2007). Parallel and comparable corpora: What is happening? In Incorporating corpora: The linguist and the translator (pp. 18-31)]. Although they are not used as much in language learning, there has been promising work recently, particularly (as far as I’m aware) here in Japan [see Anthony, L., Chujo, K., & Oghigian, K. (2011). A novel, web-based, parallel concordancer for use in the ESL/EFL classroom. In Corpus-based studies in language use, language learning, and language documentation (pp. 123-138). Brill; see also Chujo, K., Kobayashi, Y., Mizumoto, A., & Oghigian, K. (2016). Exploring the effectiveness of combined web-based corpus tools for beginner EFL DDL. Linguistics and Literature Studies, 4(4).]

Parallel concordancing can be used for activities like translation tasks, of course, but it is also useful for DDL, at least in certain situations. In my experience, having translations of English concordance lines available in students’ L1 is very helpful for both lower-proficiency students and novice DDL students. Both the content and format of concordance lines can be difficult for such students, but in both cases the L1 support offered by parallel corpora allows them to quickly grasp the meaning of the English lines, letting them focus on the context or patterns in those lines. Even if they don’t always need the L1 support to understand the English lines, they often feel more comfortable and are more receptive to doing activities and work they are generally unaccustomed to. Perhaps as they become more familiar with concordance lines they can switch to monolingual ones.

Another benefit is that they can get a sense of how differently (or similarly) concepts, ideas, or notions may be expressed in the L2 as compared to their L1. Students can pick up on shades of meaning, nuance, and usage. I’ve seen this lead to lexical development where students have commented that they found a phrase or new (and natural-sounding) way to express something they had previously expressed inaccurately due to L1 interference, or had been completely unaware of because it wasn’t covered in any traditional way (i.e. it really is something they discovered for themselves). It’s only anecdotal, but I have spoken with my students about these mini ‘light-bulb’ moments and they react very positively to them.

There can be issues, though. There needs to be some understanding of, say, the directionality and relationship of the source material to the translations, of where the translations have come from and their quality, and of course of the fact that the translation seen in a concordance line is almost certainly not the only potential/accurate way to translate the source text. Another thing to keep in mind is that students need to share a single L1, unless the corpus is multilingual with translations available for all of the students’ L1s (which would overcome one issue but possibly raise others).

But still, parallel concordances can be quite useful and make it easier for students to get involved in doing DDL work. For more info about uses and issues with parallel corpora/concordances I recommend reading ‘Frankenberg-Garcia, A. (2005). Pedagogical uses of monolingual and parallel concordances. ELT Journal, 59(3), 189-198.’


Finally, where are these parallel corpora? A simple Google search will turn up numerous parallel corpora available for download, such as the Open Parallel Corpus (OPUS), but that means you need to run your own parallel concordancing software. Something like AntPConc might be a relatively easy-to-use piece of software for this. However, even if you are comfortable running an application like AntPConc, the parallel corpora you find might not be appropriate for your students unless you are in an ESP environment, with students learning language for, say, international legal or technical contexts (like the EuroParl corpus).

Alternatively, I’ve compiled a very brief list of some parallel corpora and projects that have web-based interfaces. A caution, though: I am familiar only with the English-Japanese corpora on this list. Although some of the others have been used for language learning, or designed with language learning as a goal, I cannot vouch for the pedagogic applicability or accuracy of the other language combinations here (I’ll leave that to folks who understand the languages in these corpora).

Note: All of these are combined with English

Japanese

SCoRE (and WebSCoRE); WebParaNews

Chinese

E-C Concord (more info can be found here); BFSU CQPweb has several parallel corpora (for guests the user ID and password are both test)

Korean

MOA

Thai

ETPC

Polish

PACO for EPPC

Portuguese

COMPARA (further information about COMPARA is available here)

Multilingual

Tatoeba; Linguee; Reverso Context

Learner Language

ENEJE (this parallel corpus aligns essays by Japanese EFL students with edits made by native English speakers)


I’m sure there are many more. Feel free to list others in the comments 🙂

 

Not just for reference: dictionaries and corpora

These are the slides for a presentation/workshop I am giving at the ETJ Expo (details) in Tokyo this weekend. I am talking about some of the uses of learner dictionaries and English/Japanese parallel corpora like SCoRE and WebParaNews.

Handout: etjhandout-2

Note: Slide 4 erroneously links to ‘oven’, not ‘stove’, from the monolingual dictionary link.

CorpusMOOC Weeks 4-5

Week 4

Two main things were going on in week 4: A) Lecture and activities for annotation and tagging, and B) Reviewing essays that used corpus analyses.

Annotation comes up a lot in CL readings and discussions, and I would imagine it seems extremely technical when you only read or hear about it. The practical activities/videos using CLAWS, TagAnt, and USAS do a lot to demystify the tagging process. I was particularly interested in USAS as I have not used a semantic tagger before.
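To give a feel for what the output of a tagger like CLAWS looks like, here is a deliberately tiny rule-based sketch in Python. The lexicon, suffix rules, and tag labels are simplified illustrations (loosely modeled on CLAWS-style tags), not the actual CLAWS rules or tagset:

```python
# A toy PoS tagger: looks up common function words, then falls back on
# crude suffix rules. Real taggers combine lexicons with probabilistic
# disambiguation and reach far higher accuracy.
LEXICON = {
    "the": "AT", "a": "AT", "is": "VBZ", "was": "VBD",
    "he": "PP", "she": "PP", "and": "CC", "in": "II",
}

def guess_tag(word):
    w = word.lower()
    if w in LEXICON:
        return LEXICON[w]
    if w.endswith("ing"):
        return "VVG"   # -ing form
    if w.endswith("ly"):
        return "RR"    # adverb
    if w.endswith("ed"):
        return "VVD"   # past tense
    return "NN"        # noun as the default guess

def tag(sentence):
    """Return CLAWS-style word_TAG output for a whitespace-split sentence."""
    return " ".join(f"{w}_{guess_tag(w)}" for w in sentence.split())

print(tag("she was quietly reading in the garden"))
# → she_PP was_VBD quietly_RR reading_VVG in_II the_AT garden_NN
```

Crude suffix rules like these fail quickly (e.g. “bed” would be tagged as a past-tense verb), which is exactly why real taggers add probabilistic disambiguation, and why a small rate of tagging errors remains even in well-tagged corpora.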

Reading and reviewing undergraduate essays was also a nice activity because in a few ways it helped to demonstrate what one should look for when doing analyses, as well as stimulating thinking about how info and data should be presented. It was useful to me as a reflective assessment.

Week 5

In week 5 the main topics were, broadly, using CL to look at social issues and introducing CQPweb, which, if you’ve ever tried to use it before, you’ll know has an interface that can be difficult to come to grips with (i.e. it’s a bit intimidating imo). Having a walk-thru like this really helps, because CQPweb is a really powerful tool that gives you access to a lot of corpora (not only English ones).

The exploration of CL with forensic linguistics is also quite interesting, and is a nice example of CL being used to examine the context and setting of language use: not just a focus on grammatical or internal characteristics of language, but on how those characteristics can inform our understanding of what is going on in a case, and in turn how that understanding can affect and influence decision-making in the world.

CorpusMOOC Week 2

A few thoughts for week 2:

  • I still think there is a lot of stuff, coming pretty fast, that might be overwhelming for true beginners. It’s good that folks can go back and review things (or skip/hold something for later).
  • After last week’s vocabulary and concept building, I appreciated the more practical bent of this week’s material. Especially, the discussions of language change in the lecture and main reading were very clear and stimulating. Also, a lot of the focus this week was on collocation, which, of course, is so fundamental to a lot of corpus work.
  • Part of what I think was good about the lecture and reading was the introduction of statistical measurements such as Mutual Information and log-likelihood scores. Statistical work can be daunting, but I was happy to see/read clear, if brief, descriptions of how such scores are used. This is just a personal theory (at this point), but I think a lot of folks’ reluctance to bother with corpora comes from unfamiliarity with, or a lack of understanding of, statistics.
  • In addition to further tutorials for AntConc, GraphColl was introduced. I had toyed around with GraphColl a little bit before, but this intro clarified quite a few things. The #Lancsbox package, of which GraphColl is a part, can run a number of customized statistical measurements on a corpus. The GraphColl feature lets the user visualize collocations rather than just reading them in a list or table. On top of that, the visualization lets you see collocational relationships that may be removed by a number of degrees. For example, A collocates with B, and B collocates with C, but what about A and C? GraphColl lets the user see how A and C may be connected through B. This is termed a ‘collocation network’. The #Lancsbox website has a citation paper that explains collocation networks in more depth.
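As a rough sketch of two of this week’s ideas (the frequencies and collocates below are invented for illustration; real tools count co-occurrence within a window across a whole corpus), Mutual Information scoring and a GraphColl-style second-order collocate lookup might look like this in Python:

```python
import math

def mutual_information(f_node, f_coll, f_pair, corpus_size):
    """Pointwise Mutual Information (Church & Hanks style): how much
    more often a pair co-occurs than chance alone would predict, in bits."""
    expected = (f_node * f_coll) / corpus_size
    return math.log2(f_pair / expected)

# Invented frequencies: node word 500x, collocate 300x, together 40x
print(round(mutual_information(500, 300, 40, 1_000_000), 2))  # → 8.06

# A tiny collocation 'network': each word maps to its direct collocates.
EDGES = {
    "strong": {"tea", "argument"},
    "tea": {"strong", "cup", "green"},
    "argument": {"strong", "heated"},
}

def second_order(node, edges):
    """Collocates of collocates (A -> B -> C), excluding A's own
    direct collocates: the kind of link GraphColl visualizes."""
    direct = edges.get(node, set())
    indirect = set()
    for b in direct:
        indirect |= edges.get(b, set())
    return indirect - direct - {node}

print(sorted(second_order("strong", EDGES)))  # → ['cup', 'green', 'heated']
```

The MI score is high here because the pair co-occurs about 270 times more often than its individual frequencies would predict; a score near 0 would mean the two words are essentially independent.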

Looking forward to week 3 🙂

**Update**

I highly recommend not skipping the video with Michael Hoey where he discusses priming and collocation in the mind. There are some very interesting anecdotes and concepts discussed that would be great, imo, to look into further.

 

SCoRE Corpus Update

The Sentence Corpus of Remedial English was updated and moved to a new website earlier this month: http://www.score-corpus.org. After following the link, there is a button in the top right corner of the page to switch to English.

The tools (Pattern Browser, Concordancer, Quiz Generator) seem to be the same, but it appears the corpus itself has expanded. The new website is much more comprehensive in the information it provides, including pages for things like SCoRE-related publications and user guides in both Japanese and English (there used to only be a Japanese language user guide). There is even a page where users can submit example sentences that may be included in future updates of the corpus.

Very good to see this project developing and continuing.